Models

AtomicGraphNets.build_CGCNN — Function

Build a model of the architecture introduced in the Xie and Grossman 2018 paper: https://arxiv.org/abs/1710.10324

Input to the resulting model is a FeaturedGraph whose feature matrix has input_feature_length rows and one column for each node in the input graph.

The network consists of convolutional layers, followed by pooling to a fixed length, and then Dense layers leading to the output.

Arguments

  • input_feature_length::Integer: length of feature vector at each node
  • num_conv::Integer: number of convolutional layers
  • conv_activation::F: activation function on convolutional layers
  • atom_conv_feature_length::Integer: length of output of conv layers
  • pool_type::String: type of pooling after convolution (mean or max)
  • pool_width::Float: fraction of atom_conv_feature_length that the pooling window should span
  • pooled_feature_length::Integer: feature length to pool down to
  • num_hidden_layers::Integer: number of Dense layers before the output layer; note that if this is set to 1, no nonlinearity is imposed on these layers
  • hidden_layer_activation::F: activation function on hidden layers
  • output_layer_activation::F: activation function on output layer; should generally be identity for regression and something that normalizes appropriately (e.g. softmax) for classification
  • output_length::Integer: length of output vector
  • initW::F: function to use to initialize weights in trainable layers
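For illustration, here is a hedged usage sketch. It assumes input_feature_length is the positional argument and the remaining names above are keywords (the values shown are illustrative choices, not the package's defaults), and that a FeaturedGraph can be built from an adjacency matrix and a feature matrix as in GeometricFlux; check the installed versions for exact signatures.

```julia
using AtomicGraphNets, Flux
using GeometricFlux: FeaturedGraph

input_feature_length = 40

# Build a CGCNN-style model; keyword names follow the argument list above,
# and the values are illustrative choices, not the package defaults.
model = build_CGCNN(
    input_feature_length;
    num_conv = 3,
    conv_activation = softplus,
    atom_conv_feature_length = 80,
    pool_type = "mean",
    pool_width = 0.1,
    pooled_feature_length = 40,
    num_hidden_layers = 2,
    hidden_layer_activation = softplus,
    output_layer_activation = identity,  # identity for a regression target
    output_length = 1,
)

# Dummy 4-node input graph: an adjacency matrix plus a feature matrix with
# input_feature_length rows and one column per node, as described above.
# (The FeaturedGraph constructor form depends on the GeometricFlux version.)
adj = Float32[0 1 0 0; 1 0 1 0; 0 1 0 1; 0 0 1 0]
feats = rand(Float32, input_feature_length, 4)
prediction = model(FeaturedGraph(adj, feats))
```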
AtomicGraphNets.build_SGCNN — Function

Build a slab graph network, based on Kim et al. 2020: https://pubs.acs.org/doi/10.1021/acs.chemmater.9b03686

For now, both "parallel" convolutional paths are fixed to have the same hyperparameters, for simplicity, and are essentially copies of the CGCNN architecture. This may be relaxed later.

Arguments:

Same as build_CGCNN, with one additional parameter, hidden_layer_width (the width of the hidden Dense layers). A hedged usage sketch follows below.

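A corresponding sketch for build_SGCNN, under the same assumptions as the build_CGCNN example above (positional input_feature_length, remaining names as keywords, values illustrative rather than package defaults):

```julia
using AtomicGraphNets, Flux

# Slab graph network: two parallel CGCNN-like convolutional paths sharing
# the hyperparameters below.
slab_model = build_SGCNN(
    40;                       # input_feature_length
    num_conv = 3,
    conv_activation = softplus,
    atom_conv_feature_length = 80,
    pool_type = "mean",
    pool_width = 0.1,
    pooled_feature_length = 40,
    num_hidden_layers = 2,
    hidden_layer_width = 64,  # the parameter build_CGCNN does not take
    hidden_layer_activation = softplus,
    output_layer_activation = identity,
    output_length = 1,
)
```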