Models
AtomicGraphNets.build_CGCNN — Function

Build a model with the architecture introduced in the Xie and Grossman 2018 paper: https://arxiv.org/abs/1710.10324

Input to the resulting model is a FeaturedGraph whose feature matrix has input_feature_length rows and one column for each node in the input graph. The network has convolutional layers, then pooling to some fixed length, followed by Dense layers leading to the output.
Arguments
input_feature_length::Integer
: length of feature vector at each node
num_conv::Integer
: number of convolutional layers
conv_activation::F
: activation function on convolutional layers
atom_conv_feature_length::Integer
: length of output of convolutional layers
pool_type::String
: type of pooling after convolution (mean or max)
pool_width::Float
: fraction of atom_conv_feature_length that the pooling window should span
pooled_feature_length::Integer
: feature length to pool down to
num_hidden_layers::Integer
: how many Dense layers before the output? Note that if this is set to 1 there will be no nonlinearity imposed on these layers
hidden_layer_activation::F
: activation function on hidden layers
output_layer_activation::F
: activation function on output layer; should generally be identity for regression and something that normalizes appropriately (e.g. softmax) for classification
output_length::Integer
: length of output vector
initW::F
: function to use to initialize weights in trainable layers
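As a usage sketch, assuming input_feature_length is passed positionally and the remaining arguments as keywords; all hyperparameter values below are illustrative choices, not necessarily the package defaults:

```julia
using AtomicGraphNets
using Flux: softplus

# Build a CGCNN-style model for graphs whose nodes each carry a
# 61-dimensional feature vector, regressing to a single scalar
# property (e.g. formation energy). Values are illustrative only.
model = build_CGCNN(
    61;                              # input_feature_length
    num_conv = 3,                    # three convolutional layers
    conv_activation = softplus,
    atom_conv_feature_length = 80,   # per-node conv output length
    pool_type = "mean",
    pool_width = 0.1,                # window spans 10% of 80 = 8 features
    pooled_feature_length = 40,
    num_hidden_layers = 1,
    hidden_layer_activation = softplus,
    output_layer_activation = identity,  # identity for regression
    output_length = 1,
)

# The model can then be called on a FeaturedGraph whose feature
# matrix has 61 rows (one column per node):
# ŷ = model(featured_graph)
```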
AtomicGraphNets.build_SGCNN — Function

Build a slab graph network, based off of Kim et al. 2020: https://pubs.acs.org/doi/10.1021/acs.chemmater.9b03686

For now, both "parallel" convolutional paths are fixed to have the same hyperparameters (essentially copies of CGCNN) for simplicity. This may be relaxed later.
Arguments:
Same as build_CGCNN, except for the additional parameter hidden_layer_width.
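A corresponding sketch for the slab graph network, under the same assumptions about the calling convention as the build_CGCNN example above; hidden_layer_width is the one extra parameter:

```julia
using AtomicGraphNets
using Flux: softplus

# Same illustrative hyperparameters as the build_CGCNN sketch,
# plus hidden_layer_width.
model = build_SGCNN(
    61;                          # input_feature_length
    num_conv = 2,
    conv_activation = softplus,
    atom_conv_feature_length = 80,
    pool_type = "mean",
    pool_width = 0.1,
    pooled_feature_length = 40,
    num_hidden_layers = 2,
    hidden_layer_width = 100,    # width of the Dense hidden layers
    hidden_layer_activation = softplus,
    output_layer_activation = identity,
    output_length = 1,
)
```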