ot.gnn

Layers and functions for optimal transport in Graph Neural Networks.

Warning

Note that the module is not imported in ot by default. To use it, you need to explicitly import ot.gnn. This module depends on PyTorch Geometric, and the layers are compatible with its API.
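
For example, a minimal import sketch (assuming torch and torch_geometric are installed):

    import ot
    import ot.gnn  # explicit import required; "import ot" alone does not expose ot.gnn

    layer = ot.gnn.TFGWPooling(n_features=16)  # the layers are then available under ot.gnn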

ot.gnn.FGW_distance_to_templates(G_edges, tplt_adjacencies, G_features, tplt_features, tplt_weights, alpha=0.5, multi_alpha=False, batch=None)[source]

Computes the FGW distances between a graph and templates.

Parameters:
  • G_edges (torch.Tensor, shape (2, n_edges)) – Edge indices of the graph in the PyTorch Geometric format.

  • tplt_adjacencies (list of torch.Tensor, shape (n_templates, n_template_nodes, n_template_nodes)) – List of the adjacency matrices of the templates.

  • G_features (torch.Tensor, shape (n_nodes, n_features)) – Graph node features.

  • tplt_features (list of torch.Tensor, shape (n_templates, n_template_nodes, n_features)) – List of the node features of the templates.

  • tplt_weights (torch.Tensor, shape (n_templates, n_template_nodes)) – Weights on the nodes of the templates.

  • alpha (float, optional) – FGW trade-off parameter (0 < alpha < 1) balancing features (alpha=0) and structure (alpha=1).

  • multi_alpha (bool, optional) – If True, the alpha parameter is a vector of size n_templates.

  • batch (torch.Tensor, optional) – Batch vector which assigns each node to its graph.

Returns:

distances – Vector of fused Gromov-Wasserstein distances between the graph and the templates.

Return type:

torch.Tensor, shape (n_templates) if batch=None, else shape (n_graphs, n_templates).
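
A minimal, illustrative sketch with toy random data: the template sizes and the alpha value are arbitrary, the graph is a random Erdős–Rényi graph built with torch_geometric.utils.erdos_renyi_graph, and the templates are stored as 3D tensors matching the shapes documented above:

    import torch
    from torch_geometric.utils import erdos_renyi_graph

    import ot.gnn

    torch.manual_seed(0)
    n_nodes, n_features = 10, 3   # toy graph size
    n_tplt, n_tplt_nodes = 2, 4   # two templates with 4 nodes each

    # Toy graph: random edge index (PyTorch Geometric format) and node features.
    G_edges = erdos_renyi_graph(n_nodes, edge_prob=0.3)
    G_features = torch.randn(n_nodes, n_features)

    # Toy templates: symmetric adjacency matrices, node features and uniform node weights.
    A = torch.rand(n_tplt, n_tplt_nodes, n_tplt_nodes)
    tplt_adjacencies = 0.5 * (A + A.transpose(1, 2))
    tplt_features = torch.randn(n_tplt, n_tplt_nodes, n_features)
    tplt_weights = torch.ones(n_tplt, n_tplt_nodes) / n_tplt_nodes

    distances = ot.gnn.FGW_distance_to_templates(
        G_edges, tplt_adjacencies, G_features, tplt_features, tplt_weights, alpha=0.5
    )
    print(distances.shape)  # torch.Size([2]): one FGW distance per template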

class ot.gnn.TFGWPooling(n_features, n_tplt=2, n_tplt_nodes=2, alpha=None, train_node_weights=True, multi_alpha=False, feature_init_mean=0.0, feature_init_std=1.0)[source]
Template Fused Gromov-Wasserstein (TFGW) layer, a pooling layer for graph neural networks.

Computes the fused Gromov-Wasserstein distances between the graph and a set of templates.

\[TFGW_{ \overline{ \mathcal{G} }, \alpha }(C,F,h)=[ FGW_{\alpha}(C,F,h,\overline{C}_k,\overline{F}_k,\overline{h}_k)]_{k=1}^{K}\]

where:

  • \(\overline{\mathcal{G}}=\{(\overline{C}_k,\overline{F}_k,\overline{h}_k)\}_{k \in \{1,...,K\}}\) is the set of \(K\) templates characterized by their adjacency matrices \(\overline{C}_k\), their feature matrices \(\overline{F}_k\) and their node weights \(\overline{h}_k\).

  • \(C\), \(F\) and \(h\) are respectively the adjacency matrix, the feature matrix and the node weights of the graph.

  • \(\alpha\) is the trade-off parameter between features and structure for the Fused Gromov-Wasserstein distance.

Parameters:
  • n_features (int) – Feature dimension of the nodes.

  • n_tplt (int) – Number of graph templates.

  • n_tplt_nodes (int) – Number of nodes in each template.

  • alpha (float, optional) – FGW trade-off parameter (0 < alpha < 1) balancing features (alpha=0) and structure (alpha=1). If None, alpha is learned during training; otherwise it is fixed at the given value.

  • train_node_weights (bool, optional) – If True, the template node weights are learned; otherwise they are kept uniform.

  • multi_alpha (bool, optional) – If True, the alpha parameter is a vector of size n_tplt.

  • feature_init_mean (float, optional) – Mean of the normal distribution used to initialize the template features.

  • feature_init_std (float, optional) – Standard deviation of the normal distribution used to initialize the template features.

forward(x, edge_index, batch=None)[source]
Parameters:
  • x (torch.Tensor, shape (n_nodes, n_features)) – Node features of the graph(s).

  • edge_index (torch.Tensor) – Edge indices of the graph(s) in the PyTorch Geometric format.

  • batch (torch.Tensor, optional) – Batch vector which assigns each node to its graph.
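
A minimal sketch of how the layer might be used inside a PyTorch Geometric model; the GCNConv backbone, the hidden size and the linear classification head are illustrative choices, not part of ot.gnn:

    import torch
    from torch import nn
    from torch_geometric.nn import GCNConv

    import ot.gnn


    class TFGWClassifier(nn.Module):
        """Toy graph classifier: one GCN layer, TFGW pooling, linear head."""

        def __init__(self, n_features, n_hidden, n_classes, n_tplt=4, n_tplt_nodes=6):
            super().__init__()
            self.conv = GCNConv(n_features, n_hidden)
            # The graph embedding is the vector of FGW distances to the learned templates.
            self.pool = ot.gnn.TFGWPooling(n_hidden, n_tplt=n_tplt, n_tplt_nodes=n_tplt_nodes)
            self.head = nn.Linear(n_tplt, n_classes)

        def forward(self, x, edge_index, batch=None):
            x = torch.relu(self.conv(x, edge_index))
            x = self.pool(x, edge_index, batch)  # shape (n_graphs, n_tplt) when batch is given
            return self.head(x)

With mini-batches produced by torch_geometric.loader.DataLoader, the model would typically be called as model(data.x, data.edge_index, data.batch).
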
class ot.gnn.TWPooling(n_features, n_tplt=2, n_tplt_nodes=2, train_node_weights=True, feature_init_mean=0.0, feature_init_std=1.0)[source]
Template Wasserstein (TW) layer, also known as the OT-GNN layer, a pooling layer for graph neural networks.

Computes the Wasserstein distances between the node features of the graph and a set of templates.

\[TW_{\overline{\mathcal{G}}}(C,F,h)=[W(F,h,\overline{F}_k,\overline{h}_k)]_{k=1}^{K}\]

where:

  • \(\overline{\mathcal{G}}=\{(\overline{F}_k,\overline{h}_k)\}_{k \in \{1,...,K\}}\) is the set of \(K\) templates characterized by their feature matrices \(\overline{F}_k\) and their node weights \(\overline{h}_k\).

  • \(F\) and \(h\) are respectively the feature matrix and the node weights of the graph.

Parameters:
  • n_features (int) – Feature dimension of the nodes.

  • n_tplt (int) – Number of graph templates.

  • n_tplt_nodes (int) – Number of nodes in each template.

  • train_node_weights (bool, optional) – If True, the template node weights are learned; otherwise they are kept uniform.

  • feature_init_mean (float, optional) – Mean of the normal distribution used to initialize the template features.

  • feature_init_std (float, optional) – Standard deviation of the normal distribution used to initialize the template features.

forward(x, edge_index=None, batch=None)[source]
Parameters:
  • x (torch.Tensor, shape (n_nodes, n_features)) – Node features of the graph(s).

  • edge_index (torch.Tensor, optional) – Edge indices of the graph(s); kept for API compatibility, since the TW distance uses only node features.

  • batch (torch.Tensor, optional) – Batch vector which assigns each node to its graph.
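
A minimal, illustrative sketch with toy sizes; since the TW distance uses only node features, edge_index can be omitted:

    import torch

    import ot.gnn

    pool = ot.gnn.TWPooling(n_features=16, n_tplt=3, n_tplt_nodes=5)

    x = torch.randn(20, 16)  # 20 nodes with 16 features each
    emb = pool(x)            # no edge_index needed: the TW distance uses only node features
    print(emb.shape)         # torch.Size([3]): one Wasserstein distance per template

The layer can also replace TFGWPooling in the classifier sketch above, as both expose the same forward(x, edge_index, batch) interface.
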
ot.gnn.wasserstein_distance_to_templates(G_features, tplt_features, tplt_weights, batch=None)[source]

Computes the Wasserstein distances between a graph and graph templates.

Parameters:
  • G_features (torch.Tensor, shape (n_nodes, n_features)) – Node features of the graph.

  • tplt_features (list of torch.Tensor, shape (n_templates, n_template_nodes, n_features)) – List of the node features of the templates.

  • tplt_weights (torch.Tensor, shape (n_templates, n_template_nodes)) – Weights on the nodes of the templates.

  • batch (torch.Tensor, optional) – Batch vector which assigns each node to its graph.

Returns:

distances – Vector of Wasserstein distances between the graph and the templates.

Return type:

torch.Tensor, shape (n_templates) if batch=None, else shape (n_graphs, n_templates)
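
A minimal, illustrative call with toy tensors, using the shapes documented above:

    import torch

    import ot.gnn

    n_nodes, n_features = 8, 4
    n_tplt, n_tplt_nodes = 3, 5

    G_features = torch.randn(n_nodes, n_features)
    tplt_features = torch.randn(n_tplt, n_tplt_nodes, n_features)
    tplt_weights = torch.ones(n_tplt, n_tplt_nodes) / n_tplt_nodes  # uniform template node weights

    distances = ot.gnn.wasserstein_distance_to_templates(G_features, tplt_features, tplt_weights)
    print(distances.shape)  # torch.Size([3]): one Wasserstein distance per template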