Model Building

We recommend using tools like Spektral to build your model. That said, here are some utilities that we found useful.

Masks

menten_gcn.util.make_and_apply_node_mask(X: tensorflow.python.keras.engine.base_layer.Layer, A: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Sometimes you want to zero out rows of your X layer that are not currently populated with a node. This method applies a mask derived from the adjacency matrix to do that.

Parameters
  • X (layer) – Node features

  • A (layer) – Adjacency matrix

Returns

- keras layer

menten_gcn.util.make_and_apply_edge_mask(E: tensorflow.python.keras.engine.base_layer.Layer, A: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Sometimes you want to zero out elements of your E layer that are not currently populated with an edge. This method applies a mask derived from the adjacency matrix to do that.

Parameters
  • E (layer) – Edge features

  • A (layer) – Adjacency matrix

Returns

- keras layer

Example:

# Setup
# data_maker is a menten_gcn DataMaker created earlier
from tensorflow.keras.layers import Conv1D, Conv2D
import menten_gcn.util

X_in, A_in, E_in = data_maker.generate_XAE_input_tensors()
X = X_in
A = A_in
E = E_in

# Phase 1 - preprocess
for i in [128, 64]:
    # Conv1D and Conv2D are standard keras layers
    X = Conv1D( i, kernel_size=1, activation='relu' )( X )
    E = Conv2D( i, kernel_size=1, activation='relu' )( E )

# Zero out the rows/elements that do not hold real nodes/edges
X = menten_gcn.util.make_and_apply_node_mask( X, A )
E = menten_gcn.util.make_and_apply_edge_mask( E, A )

# continue with more layers...

menten_gcn.util.make_node_mask(A: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Create a node mask once, then re-use it many times in the future.

Parameters

A (layer) – Adjacency matrix

Returns

- keras layer

menten_gcn.util.apply_node_mask(X: tensorflow.python.keras.engine.base_layer.Layer, X_mask: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Apply a node mask that you’ve already made with make_node_mask.

Parameters
  • X (layer) – Node features

  • X_mask (layer) – Mask created by make_node_mask

Returns

- keras layer
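
Example (a minimal sketch, assuming X and A are the tensors from the setup above; the Dense layers and sizes are arbitrary placeholders, not part of the API):

from tensorflow.keras.layers import Dense
import menten_gcn.util

# Build the node mask once from the adjacency matrix
X_mask = menten_gcn.util.make_node_mask( A )

# Re-use it after every update of X, since Dense layers
# re-introduce nonzero values in the unpopulated rows
for i in [64, 32]:
    X = Dense( i, activation='relu' )( X )
    X = menten_gcn.util.apply_node_mask( X, X_mask )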

menten_gcn.util.make_edge_mask(A: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Create an edge mask once, then re-use it many times in the future.

Parameters

A (layer) – Adjacency matrix

Returns

- keras layer

menten_gcn.util.apply_edge_mask(E: tensorflow.python.keras.engine.base_layer.Layer, E_mask: tensorflow.python.keras.engine.base_layer.Layer) → tensorflow.python.keras.engine.base_layer.Layer

Apply an edge mask that you’ve already made with make_edge_mask.

Parameters
  • E (layer) – Edge features

  • E_mask (layer) – Mask created by make_edge_mask

Returns

- keras layer
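
Example (the edge utilities follow the same pattern; a minimal sketch assuming E and A come from the setup above, with an arbitrary Conv2D size):

from tensorflow.keras.layers import Conv2D
import menten_gcn.util

# Build the edge mask once from the adjacency matrix
E_mask = menten_gcn.util.make_edge_mask( A )

# Re-use it whenever E is updated
E = Conv2D( 32, kernel_size=1, activation='relu' )( E )
E = menten_gcn.util.apply_edge_mask( E, E_mask )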

Convolutions

menten_gcn.playground.make_NENE_XE_conv(X: tensorflow.python.keras.engine.base_layer.Layer, A: tensorflow.python.keras.engine.base_layer.Layer, E: tensorflow.python.keras.engine.base_layer.Layer, Tnfeatures: list, Xnfeatures: int, Enfeatures: int, Xactivation='relu', Eactivation='relu', attention: bool = False, apply_T_to_E: bool = False, E_mask=None, X_mask=None) → Tuple[tensorflow.python.keras.engine.base_layer.Layer, tensorflow.python.keras.engine.base_layer.Layer]

We find that current GCN layers undervalue the edge tensors. Not only does this convolution use them as input, it also updates their values.

Disclaimer: this isn’t actually a layer at the moment. It’s a method that hacks layers together and returns the result.

Parameters
  • X (layer) – Node features

  • A (layer) – Adjacency matrix

  • E (layer) – Edge features

  • Tnfeatures (list of ints) – How large should each intermediate layer be? The length of this list determines the number of intermediate layers.

  • Xnfeatures (int) – How many features do you want each node to end up with?

  • Enfeatures (int) – How many features do you want each edge to end up with?

  • Xactivation – Which activation function should be applied to the final X?

  • Eactivation – Which activation function should be applied to the final E?

  • attention (bool) – Should we apply attention weights to the sum operations?

  • apply_T_to_E (bool) – Should the input to the final E conv be the Temp tensor or the initial NENE? Feel free to just use the default if that question makes no sense.

  • E_mask (layer) – If you already made an edge mask, feel free to pass it here to save us time.

  • X_mask (layer) – If you already made a node mask, feel free to pass it here to save us time.

Returns

  • keras layer which is the new X

  • keras layer which is the new E
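
Example (a minimal sketch continuing from the tensors and masks in the sketches above; the Tnfeatures and output sizes are arbitrary choices, not recommendations):

import menten_gcn.playground

# One convolution pass that updates both X and E.
# Passing the pre-made masks just saves a little work.
X, E = menten_gcn.playground.make_NENE_XE_conv( X, A, E,
                                                Tnfeatures=[64, 64],
                                                Xnfeatures=32,
                                                Enfeatures=32,
                                                attention=True,
                                                X_mask=X_mask,
                                                E_mask=E_mask )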

menten_gcn.playground.make_NEENEENEE_XE_conv(X: tensorflow.python.keras.engine.base_layer.Layer, A: tensorflow.python.keras.engine.base_layer.Layer, E: tensorflow.python.keras.engine.base_layer.Layer, Tnfeatures: list, Xnfeatures: int, Enfeatures: int, Xactivation='relu', Eactivation='relu', attention: bool = False, E_mask=None, X_mask=None) → Tuple[tensorflow.python.keras.engine.base_layer.Layer, tensorflow.python.keras.engine.base_layer.Layer]

Same idea as make_NENE_XE_conv but considers all possible 3-body interactions. Warning: this will use a ton of memory if your graph is large.

Disclaimer: this isn’t actually a layer at the moment. It’s a method that hacks layers together and returns the result.

Parameters
  • X (layer) – Node features

  • A (layer) – Adjacency matrix

  • E (layer) – Edge features

  • Tnfeatures (list of ints) – This time, you get to decide the number of middle layers. Make this list as long as you want.

  • Xnfeatures (int) – How many features do you want each node to end up with?

  • Enfeatures (int) – How many features do you want each edge to end up with?

  • Xactivation – Which activation function should be applied to the final X?

  • Eactivation – Which activation function should be applied to the final E?

  • attention (bool) – Should we apply attention weights to the sum operations?

  • E_mask (layer) – If you already made an edge mask, feel free to pass it here to save us time.

  • X_mask (layer) – If you already made a node mask, feel free to pass it here to save us time.

Returns

  • keras layer which is the new X

  • keras layer which is the new E
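
Example (usage mirrors make_NENE_XE_conv; a minimal sketch with small placeholder sizes, keeping the memory warning above in mind):

import menten_gcn.playground

# Three-body convolution with the same call pattern,
# minus the apply_T_to_E option
X, E = menten_gcn.playground.make_NEENEENEE_XE_conv( X, A, E,
                                                     Tnfeatures=[32],
                                                     Xnfeatures=32,
                                                     Enfeatures=32,
                                                     X_mask=X_mask,
                                                     E_mask=E_mask )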