# ot.gromov

Solvers related to Gromov-Wasserstein problems.

ot.gromov.GW_distance_estimation(C1, C2, p, q, loss_fun, T, nb_samples_p=None, nb_samples_q=None, std=True, random_state=None)[source]

Returns an approximation of the Gromov-Wasserstein loss between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$ with a fixed transport plan $$\mathbf{T}$$. To recover an approximation of the Gromov-Wasserstein distance as defined in , compute $$d_{GW} = \frac{1}{2} \sqrt{\mathbf{GW}}$$.

The function gives an unbiased approximation of the following equation:

$\mathbf{GW} = \sum_{i,j,k,l} L(\mathbf{C_{1}}_{i,k}, \mathbf{C_{2}}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}$

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• L : Loss function to account for the misfit between the similarity matrices

• $$\mathbf{T}$$: Matrix with marginal $$\mathbf{p}$$ and $$\mathbf{q}$$

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,)) – Distribution in the source space

• q (array-like, shape (nt,)) – Distribution in the target space

• loss_fun (function: $$\mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}$$) – Loss function used for the distance; the transport plan does not depend on the loss function

• T (csr or array-like, shape (ns, nt)) – Transport plan matrix, either a sparse csr or a dense matrix

• nb_samples_p (int, optional) – Number of samples (without replacement) drawn along the first dimension of $$\mathbf{T}$$

• nb_samples_q (int, optional) – Number of samples drawn along the second dimension of $$\mathbf{T}$$, for each sample along the first

• std (bool, optional) – If True, also return the standard deviation associated with the estimate of the Gromov-Wasserstein cost

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

Returns:

Gromov-Wasserstein cost

Return type:

float
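The estimator subsamples the quadruple sum above rather than evaluating it exactly. As a point of reference, the sketch below is a hypothetical pure-Python evaluation of the exact (non-sampled) formula with the squared loss; it illustrates what is being estimated and is not POT's implementation:

```python
# Exact evaluation of GW = sum_{i,j,k,l} L(C1[i,k], C2[j,l]) T[i,j] T[k,l]
# (illustrative sketch only -- the function above estimates this by sampling).

def gw_loss(C1, C2, T, loss_fun=lambda a, b: (a - b) ** 2):
    ns, nt = len(C1), len(C2)
    total = 0.0
    for i in range(ns):
        for j in range(nt):
            for k in range(ns):
                for l in range(nt):
                    total += loss_fun(C1[i][k], C2[j][l]) * T[i][j] * T[k][l]
    return total

# Two identical 2-point spaces matched by a diagonal plan: the loss is 0.
C = [[0.0, 1.0], [1.0, 0.0]]
T = [[0.5, 0.0], [0.0, 0.5]]
print(gw_loss(C, C, T))  # 0.0
```

The exact sum costs O(ns² nt²) operations, which is precisely why the sampled estimator exists.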

References

ot.gromov.entropic_fused_gromov_barycenters(N, Ys, Cs, ps=None, p=None, lambdas=None, loss_fun='square_loss', epsilon=0.1, symmetric=True, alpha=0.5, max_iter=1000, tol=1e-09, warmstartT=False, verbose=False, log=False, init_C=None, init_Y=None, random_state=None, **kwargs)[source]

Returns the Fused Gromov-Wasserstein barycenters of S measurable networks with node features $$(\mathbf{C}_s, \mathbf{Y}_s, \mathbf{p}_s)_{1 \leq s \leq S}$$ estimated using Fused Gromov-Wasserstein transports from Sinkhorn projections.

The function solves the following optimization problem:

$\mathbf{C}^*, \mathbf{Y}^* = \mathop{\arg \min}_{\mathbf{C}\in \mathbb{R}^{N \times N}, \mathbf{Y}\in \mathbb{Y}^{N \times d}} \quad \sum_s \lambda_s \mathrm{FGW}_{\alpha}(\mathbf{C}, \mathbf{C}_s, \mathbf{Y}, \mathbf{Y}_s, \mathbf{p}, \mathbf{p}_s)$

Where :

• $$\mathbf{Y}_s$$: feature matrix

• $$\mathbf{C}_s$$: metric cost matrix

• $$\mathbf{p}_s$$: distribution

Parameters:
• N (int) – Size of the targeted barycenter

• Ys (list of array-like, each element has shape (ns,d)) – Features of all samples

• Cs (list of S array-like of shape (ns,ns)) – Metric cost matrices

• ps (list of S array-like of shape (ns,), optional) – Sample weights in the S spaces. If left to its default value None, uniform distributions are taken.

• p (array-like, shape (N,), optional) – Weights in the targeted barycenter. If left to its default value None, uniform distribution is taken.

• lambdas (list of float, optional) – List of the S spaces’ weights. If left to its default value None, uniform weights are taken.

• loss_fun (callable, optional) – Tensor-matrix multiplication function based on a specific loss function

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether the structures are assumed symmetric. Default value is True. If set to True (resp. False), the structure matrices will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• warmstartT (bool, optional) – Whether to warmstart the transport plans in the successive fused Gromov-Wasserstein transport problems.

• verbose (bool, optional) – Print information along iterations.

• log (bool, optional) – Record log if True.

• init_C (array-like, shape (N, N), optional) – Initial value for the $$\mathbf{C}$$ matrix provided by the user. If not set, a random initialization is used.

• init_Y (array-like, shape (N,d), optional) – Initialization for the barycenters’ features. If not set a random init is used.

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

• **kwargs (dict) – Parameters passed directly to the ot.entropic_fused_gromov_wasserstein solver.

Returns:

• Y (array-like, shape (N, d)) – Feature matrix in the barycenter space (permuted arbitrarily)

• C (array-like, shape (N, N)) – Similarity matrix in the barycenter space (permuted as Y’s rows)

• log (dict) – Log dictionary of error during iterations. Return only if log=True in parameters.
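For the default ‘square_loss’, the barycentric structure update under fixed transport plans has a known closed form, $$\mathbf{C} \leftarrow \left(\sum_s \lambda_s \mathbf{T}_s \mathbf{C}_s \mathbf{T}_s^T\right) / (\mathbf{p}\mathbf{p}^T)$$ (element-wise division), with each $$\mathbf{T}_s$$ of shape (N, ns). The sketch below is a hypothetical pure-Python version of that single step, not POT's code:

```python
# One structure-update step of the square-loss GW barycenter:
#   C <- (sum_s lambda_s * T_s @ C_s @ T_s.T) / (p p^T)
# under fixed (N, ns) transport plans T_s. Illustrative sketch only.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def update_C(lambdas, Ts, Cs, p):
    N = len(p)
    acc = [[0.0] * N for _ in range(N)]
    for lam, T, C in zip(lambdas, Ts, Cs):
        M = matmul(matmul(T, C), transpose(T))
        for i in range(N):
            for j in range(N):
                acc[i][j] += lam * M[i][j]
    # element-wise division by the outer product p p^T
    return [[acc[i][j] / (p[i] * p[j]) for j in range(N)] for i in range(N)]

# One input space identical to the barycenter support: the update reproduces it.
C1 = [[0.0, 1.0], [1.0, 0.0]]
T1 = [[0.5, 0.0], [0.0, 0.5]]   # diagonal plan with marginals p = q = [0.5, 0.5]
p = [0.5, 0.5]
print(update_C([1.0], [T1], [C1], p))  # [[0.0, 1.0], [1.0, 0.0]]
```

The full solver alternates this update with recomputing the plans $$\mathbf{T}_s$$ (here, via Sinkhorn projections) until convergence.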

References

ot.gromov.entropic_fused_gromov_wasserstein(M, C1, C2, p=None, q=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, alpha=0.5, G0=None, max_iter=1000, tol=1e-09, solver='PGD', warmstart=False, verbose=False, log=False, **kwargs)[source]

Returns the Fused Gromov-Wasserstein transport between $$(\mathbf{C_1}, \mathbf{Y_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{Y_2}, \mathbf{q})$$ with pairwise distance matrix $$\mathbf{M}$$ between node feature matrices $$\mathbf{Y_1}$$ and $$\mathbf{Y_2}$$, estimated using Sinkhorn projections.

If solver=”PGD”, the function solves the following entropic-regularized Fused Gromov-Wasserstein optimization problem using Projected Gradient Descent :

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l} - \epsilon H(\mathbf{T})\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Else if solver=”PPA”, the function solves the following Fused Gromov-Wasserstein optimization problem using Proximal Point Algorithm :

\begin{align}\begin{aligned}\mathbf{T}^* \in\mathop{\arg\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{M}$$: metric cost matrix between features across domains

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity and feature matrices

• H: entropy

• $$\alpha$$: trade-off parameter

Note

If the inner solver ot.sinkhorn did not converge, the coupling $$\mathbf{T}$$ returned by this function does not necessarily satisfy the marginal constraints $$\mathbf{T}\mathbf{1}=\mathbf{p}$$ and $$\mathbf{T}^T\mathbf{1}=\mathbf{q}$$. As a consequence, the returned Fused Gromov-Wasserstein loss does not necessarily satisfy distance properties and may be negative.

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, uniform distribution is taken.

• loss_fun (string, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 will be used as the initial transport plan of the solver. G0 is not required to satisfy the marginal constraints, but we strongly recommend that it does in order to correctly estimate the GW distance.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• solver (string, optional) – Solver to use either ‘PGD’ for Projected Gradient Descent or ‘PPA’ for Proximal Point Algorithm. Default value is ‘PGD’.

• warmstart (bool, optional) – Whether to warmstart the dual potentials in the successive Sinkhorn projections.

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – Record log if True.

• **kwargs (dict) – Parameters passed directly to the ot.sinkhorn solver, such as numItermax and stopThr to control its estimation precision; e.g.  suggests using numItermax=1.

Returns:

T – Optimal coupling between the two joint spaces

Return type:

array-like, shape (ns, nt)
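The PGD and PPA problems above differ only in whether the entropy term appears in the objective. As a sketch of what is being minimized, the following hypothetical pure-Python function evaluates the entropic FGW objective for a given plan T, with the squared loss and the convention $$H(\mathbf{T}) = -\sum_{i,j} \mathbf{T}_{i,j}(\log \mathbf{T}_{i,j} - 1)$$; it illustrates the formula and is not POT's solver:

```python
import math

def entropic_fgw_objective(M, C1, C2, T, alpha=0.5, epsilon=0.1):
    """(1 - alpha) <T, M>_F + alpha * sum L(C1_ik, C2_jl) T_ij T_kl - eps * H(T),
    with L the squared loss and H(T) = -sum T_ij (log T_ij - 1)."""
    ns, nt = len(C1), len(C2)
    linear = sum(T[i][j] * M[i][j] for i in range(ns) for j in range(nt))
    quad = sum((C1[i][k] - C2[j][l]) ** 2 * T[i][j] * T[k][l]
               for i in range(ns) for j in range(nt)
               for k in range(ns) for l in range(nt))
    ent = -sum(T[i][j] * (math.log(T[i][j]) - 1)
               for i in range(ns) for j in range(nt) if T[i][j] > 0)
    return (1 - alpha) * linear + alpha * quad - epsilon * ent

# Matched spaces with a zero-cost feature matching: objective is 0 when eps = 0.
M = [[0.0, 1.0], [1.0, 0.0]]
C = [[0.0, 1.0], [1.0, 0.0]]
T = [[0.5, 0.0], [0.0, 0.5]]
print(entropic_fgw_objective(M, C, C, T, alpha=0.5, epsilon=0.0))  # 0.0
```

With epsilon > 0 the objective can go negative even at a perfect matching, which is the behavior the Note above warns about.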

References

ot.gromov.entropic_fused_gromov_wasserstein2(M, C1, C2, p=None, q=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, alpha=0.5, G0=None, max_iter=1000, tol=1e-09, solver='PGD', warmstart=False, verbose=False, log=False, **kwargs)[source]

Returns the Fused Gromov-Wasserstein distance between $$(\mathbf{C_1}, \mathbf{Y_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{Y_2}, \mathbf{q})$$ with pairwise distance matrix $$\mathbf{M}$$ between node feature matrices $$\mathbf{Y_1}$$ and $$\mathbf{Y_2}$$, estimated using Sinkhorn projections.

If solver=”PGD”, the function solves the following entropic-regularized Fused Gromov-Wasserstein optimization problem using Projected Gradient Descent :

\begin{align}\begin{aligned}\mathbf{FGW} = \mathop{\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l} - \epsilon H(\mathbf{T})\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Else if solver=”PPA”, the function solves the following Fused Gromov-Wasserstein optimization problem using Proximal Point Algorithm :

\begin{align}\begin{aligned}\mathbf{FGW} = \mathop{\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{M}$$: metric cost matrix between features across domains

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity and feature matrices

• H: entropy

• $$\alpha$$: trade-off parameter

Note

If the inner solver ot.sinkhorn did not converge, the coupling $$\mathbf{T}$$ returned by this function does not necessarily satisfy the marginal constraints $$\mathbf{T}\mathbf{1}=\mathbf{p}$$ and $$\mathbf{T}^T\mathbf{1}=\mathbf{q}$$. As a consequence, the returned Fused Gromov-Wasserstein loss does not necessarily satisfy distance properties and may be negative.

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, uniform distribution is taken.

• loss_fun (string, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 will be used as the initial transport plan of the solver. G0 is not required to satisfy the marginal constraints, but we strongly recommend that it does in order to correctly estimate the GW distance.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – Record log if True.

Returns:

fgw_dist – Fused Gromov-Wasserstein distance

Return type:

float

References

ot.gromov.entropic_gromov_barycenters(N, Cs, ps=None, p=None, lambdas=None, loss_fun='square_loss', epsilon=0.1, symmetric=True, max_iter=1000, tol=1e-09, warmstartT=False, verbose=False, log=False, init_C=None, random_state=None, **kwargs)[source]

Returns the Gromov-Wasserstein barycenters of S measured similarity matrices $$(\mathbf{C}_s)_{1 \leq s \leq S}$$ estimated using Gromov-Wasserstein transports from Sinkhorn projections.

The function solves the following optimization problem:

$\mathbf{C}^* = \mathop{\arg \min}_{\mathbf{C}\in \mathbb{R}^{N \times N}} \quad \sum_s \lambda_s \mathrm{GW}(\mathbf{C}, \mathbf{C}_s, \mathbf{p}, \mathbf{p}_s)$

Where :

• $$\mathbf{C}_s$$: metric cost matrix

• $$\mathbf{p}_s$$: distribution

Parameters:
• N (int) – Size of the targeted barycenter

• Cs (list of S array-like of shape (ns,ns)) – Metric cost matrices

• ps (list of S array-like of shape (ns,), optional) – Sample weights in the S spaces. If left to its default value None, uniform distributions are taken.

• p (array-like, shape (N,), optional) – Weights in the targeted barycenter. If left to its default value None, uniform distribution is taken.

• lambdas (list of float, optional) – List of the S spaces’ weights. If left to its default value None, uniform weights are taken.

• loss_fun (callable, optional) – Tensor-matrix multiplication function based on a specific loss function

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether the structures are assumed symmetric. Default value is True. If set to True (resp. False), the structure matrices will be assumed symmetric (resp. asymmetric).

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• warmstartT (bool, optional) – Whether to warmstart the transport plans in the successive Gromov-Wasserstein transport problems.

• verbose (bool, optional) – Print information along iterations.

• log (bool, optional) – Record log if True.

• init_C (array-like, shape (N, N), optional) – Initial value for the $$\mathbf{C}$$ matrix provided by the user. If not set, a random initialization is used.

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

• **kwargs (dict) – Parameters passed directly to the ot.entropic_gromov_wasserstein solver.

Returns:

• C (array-like, shape (N, N)) – Similarity matrix in the barycenter space (permuted arbitrarily)

• log (dict) – Log dictionary of error during iterations. Return only if log=True in parameters.

References

ot.gromov.entropic_gromov_wasserstein(C1, C2, p=None, q=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, G0=None, max_iter=1000, tol=1e-09, solver='PGD', warmstart=False, verbose=False, log=False, **kwargs)[source]

Returns the Gromov-Wasserstein transport between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$ estimated using Sinkhorn projections.

If solver=”PGD”, the function solves the following entropic-regularized Gromov-Wasserstein optimization problem using Projected Gradient Descent :

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg\min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l} - \epsilon H(\mathbf{T})\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Else if solver=”PPA”, the function solves the following Gromov-Wasserstein optimization problem using Proximal Point Algorithm :

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg\min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

• H: entropy

Note

If the inner solver ot.sinkhorn did not converge, the coupling $$\mathbf{T}$$ returned by this function does not necessarily satisfy the marginal constraints $$\mathbf{T}\mathbf{1}=\mathbf{p}$$ and $$\mathbf{T}^T\mathbf{1}=\mathbf{q}$$. As a consequence, the returned Gromov-Wasserstein loss does not necessarily satisfy distance properties and may be negative.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, uniform distribution is taken.

• loss_fun (string, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 will be used as the initial transport plan of the solver. G0 is not required to satisfy the marginal constraints, but we strongly recommend that it does in order to correctly estimate the GW distance.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• solver (string, optional) – Solver to use either ‘PGD’ for Projected Gradient Descent or ‘PPA’ for Proximal Point Algorithm. Default value is ‘PGD’.

• warmstart (bool, optional) – Whether to warmstart the dual potentials in the successive Sinkhorn projections.

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – Record log if True.

• **kwargs (dict) – Parameters passed directly to the ot.sinkhorn solver, such as numItermax and stopThr to control its estimation precision; e.g.  suggests using numItermax=1.

Returns:

T – Optimal coupling between the two spaces

Return type:

array-like, shape (ns, nt)
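In the PGD scheme, each iteration linearizes the quadratic term at the current plan and applies a Sinkhorn projection to the resulting linear cost. The following is a hypothetical pure-Python sketch of one such step for the squared loss with symmetric C1, C2, under the assumption that the Sinkhorn kernel is exp(-grad/epsilon); it illustrates the idea and is not POT's implementation:

```python
import math

def pgd_step(C1, C2, p, q, T, epsilon=0.1, n_sinkhorn=100):
    """One projected-gradient step for entropic GW (square loss, symmetric
    C1, C2): form the gradient of the quadratic term at T, then Sinkhorn-
    project the kernel exp(-grad / epsilon) onto the marginals (p, q)."""
    ns, nt = len(p), len(q)
    # grad[i][j] = 2 * sum_{k,l} (C1[i][k] - C2[j][l])^2 * T[k][l]
    grad = [[2 * sum((C1[i][k] - C2[j][l]) ** 2 * T[k][l]
                     for k in range(ns) for l in range(nt))
             for j in range(nt)] for i in range(ns)]
    K = [[math.exp(-grad[i][j] / epsilon) for j in range(nt)] for i in range(ns)]
    u, v = [1.0] * ns, [1.0] * nt
    for _ in range(n_sinkhorn):  # alternate scaling = Sinkhorn projection
        u = [p[i] / sum(K[i][j] * v[j] for j in range(nt)) for i in range(ns)]
        v = [q[j] / sum(K[i][j] * u[i] for i in range(ns)) for j in range(nt)]
    return [[u[i] * K[i][j] * v[j] for j in range(nt)] for i in range(ns)]

C = [[0.0, 1.0], [1.0, 0.0]]
T1 = pgd_step(C, C, [0.5, 0.5], [0.5, 0.5], [[0.25, 0.25], [0.25, 0.25]])
print([sum(row) for row in T1])  # rows sum to p (up to Sinkhorn tolerance)
```

The PPA variant differs in that the kernel is additionally multiplied elementwise by the current plan T, which is a proximal (KL) step rather than a plain projected-gradient step.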

References

ot.gromov.entropic_gromov_wasserstein2(C1, C2, p=None, q=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, G0=None, max_iter=1000, tol=1e-09, solver='PGD', warmstart=False, verbose=False, log=False, **kwargs)[source]

Returns the Gromov-Wasserstein loss $$\mathbf{GW}$$ between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$ estimated using Sinkhorn projections. To recover the Gromov-Wasserstein distance as defined in , compute $$d_{GW} = \frac{1}{2} \sqrt{\mathbf{GW}}$$.

If solver=”PGD”, the function solves the following entropic-regularized Gromov-Wasserstein optimization problem using Projected Gradient Descent :

\begin{align}\begin{aligned}\mathbf{GW} = \mathop{\min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l} - \epsilon H(\mathbf{T})\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Else if solver=”PPA”, the function solves the following Gromov-Wasserstein optimization problem using Proximal Point Algorithm :

\begin{align}\begin{aligned}\mathbf{GW} = \mathop{\min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

• H: entropy

Note

If the inner solver ot.sinkhorn did not converge, the coupling $$\mathbf{T}$$ returned by this function does not necessarily satisfy the marginal constraints $$\mathbf{T}\mathbf{1}=\mathbf{p}$$ and $$\mathbf{T}^T\mathbf{1}=\mathbf{q}$$. As a consequence, the returned Gromov-Wasserstein loss does not necessarily satisfy distance properties and may be negative.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, uniform distribution is taken.

• loss_fun (string, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• epsilon (float, optional) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 will be used as the initial transport plan of the solver. G0 is not required to satisfy the marginal constraints, but we strongly recommend that it does in order to correctly estimate the GW distance.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error (>0)

• solver (string, optional) – Solver to use either ‘PGD’ for Projected Gradient Descent or ‘PPA’ for Proximal Point Algorithm. Default value is ‘PGD’.

• warmstart (bool, optional) – Whether to warmstart the dual potentials in the successive Sinkhorn projections.

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – Record log if True.

• **kwargs (dict) – Parameters passed directly to the ot.sinkhorn solver, such as numItermax and stopThr to control its estimation precision; e.g.  suggests using numItermax=1.

Returns:

gw_dist – Gromov-Wasserstein distance

Return type:

float

References

ot.gromov.entropic_semirelaxed_fused_gromov_wasserstein(M, C1, C2, p=None, loss_fun='square_loss', symmetric=None, epsilon=0.1, alpha=0.5, G0=None, max_iter=10000.0, tol=1e-09, log=False, verbose=False, **kwargs)[source]

Computes the entropic-regularized semi-relaxed FGW transport between two graphs (see ) estimated using a Mirror Descent algorithm following the KL geometry.

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg \min}_{\mathbf{T}} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

where :

• $$\mathbf{M}$$ is the (ns, nt) metric cost matrix between features

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$ source weights (sum to 1)

• L is a loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all of the solver’s steps are differentiable.

The problem is solved with a Mirror Descent algorithm, as discussed in

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• loss_fun (str) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• epsilon (float) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error computed on transport plans

• log (bool, optional) – record log if True

• verbose (bool, optional) – Print information along iterations

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• G (array-like, shape (ns, nt)) – Optimal transportation matrix for the given parameters.

• log (dict) – Log dictionary return only if log==True in parameters.
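Because the semi-relaxed problem constrains only the row marginal, each KL mirror-descent iteration admits a closed-form projection: multiply the plan elementwise by exp(-grad/epsilon), then rescale each row i to sum to p[i]. The sketch below is a hypothetical pure-Python version of that single step, not POT's code:

```python
import math

def semirelaxed_md_step(T, grad, p, epsilon=0.1):
    """One KL mirror-descent step with only the row-marginal constraint:
    T <- T * exp(-grad / epsilon), then rescale row i to sum to p[i]."""
    ns, nt = len(T), len(T[0])
    K = [[T[i][j] * math.exp(-grad[i][j] / epsilon) for j in range(nt)]
         for i in range(ns)]
    return [[K[i][j] * p[i] / sum(K[i]) for j in range(nt)] for i in range(ns)]

# Mass shifts toward low-gradient entries while each row keeps its mass p[i].
T = [[0.25, 0.25], [0.25, 0.25]]
grad = [[0.0, 1.0], [1.0, 0.0]]
p = [0.5, 0.5]
T_new = semirelaxed_md_step(T, grad, p)
print([sum(row) for row in T_new])  # ≈ [0.5, 0.5]
```

The absence of a column constraint is what makes the projection a simple row rescaling here, whereas the balanced solvers above need full Sinkhorn iterations.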

References

ot.gromov.entropic_semirelaxed_fused_gromov_wasserstein2(M, C1, C2, p=None, loss_fun='square_loss', symmetric=None, epsilon=0.1, alpha=0.5, G0=None, max_iter=10000.0, tol=1e-09, log=False, verbose=False, **kwargs)[source]

Computes the entropic-regularized semi-relaxed FGW divergence between two graphs (see ) estimated using a Mirror Descent algorithm following the KL geometry.

\begin{align}\begin{aligned}\mathbf{srFGW}_{\alpha} = \min_{\mathbf{T}} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

where :

• $$\mathbf{M}$$ is the (ns, nt) metric cost matrix between features

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$ source weights (sum to 1)

• L is a loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all of the solver’s steps are differentiable.

The problem is solved with a Mirror Descent algorithm, as discussed in

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space.

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space.

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• loss_fun (str, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• epsilon (float) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error computed on transport plans

• log (bool, optional) – record log if True

• verbose (bool, optional) – Print information along iterations

• **kwargs (dict) – Parameters can be directly passed to the ot.optim.cg solver.

Returns:

• srfgw-divergence (float) – Semi-relaxed Fused Gromov-Wasserstein divergence for the given parameters.

• log (dict) – Log dictionary return only if log==True in parameters.

References

ot.gromov.entropic_semirelaxed_gromov_wasserstein(C1, C2, p=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, G0=None, max_iter=10000.0, tol=1e-09, log=False, verbose=False, **kwargs)[source]

Returns the transport plan of the entropic-regularized semi-relaxed Gromov-Wasserstein divergence from $$(\mathbf{C_1}, \mathbf{p})$$ to $$\mathbf{C_2}$$, estimated using a Mirror Descent algorithm following the KL geometry.

The function solves the following optimization problem:

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg \min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• L: loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all of the solver’s steps are differentiable.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, uniform distribution is taken.

• loss_fun (str) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• epsilon (float) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error computed on transport plans

• log (bool, optional) – record log if True

Returns:

• G (array-like, shape (ns, nt)) –

Coupling between the two spaces that minimizes:

$$\sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}$$

• log (dict) – Convergence information and loss.

References

ot.gromov.entropic_semirelaxed_gromov_wasserstein2(C1, C2, p=None, loss_fun='square_loss', epsilon=0.1, symmetric=None, G0=None, max_iter=10000.0, tol=1e-09, log=False, verbose=False, **kwargs)[source]

Returns the entropic-regularized semi-relaxed gromov-wasserstein divergence from $$(\mathbf{C_1}, \mathbf{p})$$ to $$\mathbf{C_2}$$ estimated using a Mirror Descent algorithm following the KL geometry.

The function solves the following optimization problem:

\begin{align}\begin{aligned}\mathbf{srGW} = \min_{\mathbf{T}} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• L: loss function to account for the misfit between the similarity matrices

Note that when using backends, this loss function is differentiable wrt the matrices (C1, C2) but not yet wrt the weights p.

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all the steps in the mirror descent are differentiable.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is taken.

• loss_fun (str) – loss function used for the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• epsilon (float) – Regularization term >0

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on error computed on transport plans

• log (bool, optional) – record log if True

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• srgw (float) – Semi-relaxed Gromov-Wasserstein divergence

• log (dict) – convergence information and Coupling matrix

References

ot.gromov.fgw_barycenters(N, Ys, Cs, ps=None, lambdas=None, alpha=0.5, fixed_structure=False, fixed_features=False, p=None, loss_fun='square_loss', armijo=False, symmetric=True, max_iter=100, tol=1e-09, warmstartT=False, verbose=False, log=False, init_C=None, init_X=None, random_state=None, **kwargs)[source]

Returns the Fused Gromov-Wasserstein barycenters of S measurable networks with node features $$(\mathbf{C}_s, \mathbf{Y}_s, \mathbf{p}_s)_{1 \leq s \leq S}$$ (see eq (5) in ), estimated using Fused Gromov-Wasserstein transports from Conditional Gradient solvers.

The function solves the following optimization problem:

$\mathbf{C}^*, \mathbf{Y}^* = \mathop{\arg \min}_{\mathbf{C}\in \mathbb{R}^{N \times N}, \mathbf{Y}\in \mathbb{R}^{N \times d}} \quad \sum_s \lambda_s \mathrm{FGW}_{\alpha}(\mathbf{C}, \mathbf{C}_s, \mathbf{Y}, \mathbf{Y}_s, \mathbf{p}, \mathbf{p}_s)$

Where :

• $$\mathbf{Y}_s$$: feature matrix

• $$\mathbf{C}_s$$: metric cost matrix

• $$\mathbf{p}_s$$: distribution

Parameters:
• N (int) – Desired number of samples of the target barycenter

• Ys (list of array-like, each element has shape (ns,d)) – Features of all samples

• Cs (list of array-like, each element has shape (ns,ns)) – Structure matrices of all samples

• ps (list of array-like, each element has shape (ns,), optional) – Masses of all samples. If left to its default value None, uniform distributions are taken.

• lambdas (list of float, optional) – List of the S spaces’ weights. If left to its default value None, uniform weights are taken.

• alpha (float, optional) – Alpha parameter for the fgw distance.

• fixed_structure (bool) – Whether to fix the structure of the barycenter during the updates

• fixed_features (bool) – Whether to fix the feature of the barycenter during the updates

• p (array-like, shape (N,), optional) – Weights in the targeted barycenter. If left to its default value None, the uniform distribution is taken.

• loss_fun (str) – Loss function used for the solver either ‘square_loss’ or ‘kl_loss’

• symmetric (bool, optional) – Whether the structures are assumed symmetric. Default value is True. If set to True (resp. False), the structures will be assumed symmetric (resp. asymmetric).

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on relative error (>0)

• warmstartT (bool, optional) – Whether to perform warmstart of transport plans in the successive fused gromov-wasserstein transport problems.

• verbose (bool, optional) – Print information along iterations.

• log (bool, optional) – Record log if True.

• init_C (array-like, shape (N,N), optional) – Initialization for the barycenters’ structure matrix. If not set a random init is used.

• init_X (array-like, shape (N,d), optional) – Initialization for the barycenters’ features. If not set a random init is used.

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

Returns:

• X (array-like, shape (N, d)) – Barycenters’ features

• C (array-like, shape (N, N)) – Barycenters’ structure matrix

• log (dict) – Only returned when log=True. It contains the keys:

• $$\mathbf{T}$$: list of (N, ns) transport matrices

• $$(\mathbf{M}_s)_s$$: all distance matrices between the feature of the barycenter and the other features $$(dist(\mathbf{X}, \mathbf{Y}_s))_s$$ shape (N, ns)

References

ot.gromov.fused_gromov_wasserstein(M, C1, C2, p=None, q=None, loss_fun='square_loss', symmetric=None, alpha=0.5, armijo=False, G0=None, log=False, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the Fused Gromov-Wasserstein transport between $$(\mathbf{C_1}, \mathbf{Y_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{Y_2}, \mathbf{q})$$ with pairwise distance matrix $$\mathbf{M}$$ between node feature matrices $$\mathbf{Y_1}$$ and $$\mathbf{Y_2}$$ (see ).

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\mathbf{T}^* \in\mathop{\arg\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{M}$$: metric cost matrix between features across domains

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity and feature matrices

• $$\alpha$$: trade-off parameter

Note

This function is backend-compatible and will work on arrays from all compatible backends. But the algorithm uses the C++ CPU backend which can lead to copy overhead on GPU arrays.

Note

All computations in the conjugate gradient solver are done with numpy to limit memory overhead.

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, the uniform distribution is taken.

• loss_fun (str, optional) – Loss function used for the solver

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• armijo (bool, optional) – If True, the step of the line-search is found via an Armijo search. Else a closed form is used. If there are convergence issues, use False.

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• log (bool, optional) – record log if True

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• T (array-like, shape (ns, nt)) – Optimal transportation matrix for the given parameters.

• log (dict) – Log dictionary, returned only if log==True in parameters.

References

ot.gromov.fused_gromov_wasserstein2(M, C1, C2, p=None, q=None, loss_fun='square_loss', symmetric=None, alpha=0.5, armijo=False, G0=None, log=False, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the Fused Gromov-Wasserstein distance between $$(\mathbf{C_1}, \mathbf{Y_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{Y_2}, \mathbf{q})$$ with pairwise distance matrix $$\mathbf{M}$$ between node feature matrices $$\mathbf{Y_1}$$ and $$\mathbf{Y_2}$$ (see ).

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\mathbf{FGW} = \mathop{\min}_\mathbf{T} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{M}$$: metric cost matrix between features across domains

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity and feature matrices

• $$\alpha$$: trade-off parameter

Note that when using backends, this loss function is differentiable wrt the matrices (C1, C2, M) and weights (p, q) for quadratic loss using the gradients from _.

Note

This function is backend-compatible and will work on arrays from all compatible backends. But the algorithm uses the C++ CPU backend which can lead to copy overhead on GPU arrays.

Note

All computations in the conjugate gradient solver are done with numpy to limit memory overhead.

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space.

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space.

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, the uniform distribution is taken.

• loss_fun (str, optional) – Loss function used for the solver.

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• armijo (bool, optional) – If True, the step of the line-search is found via an Armijo search. Else a closed form is used. If there are convergence issues, use False.

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• log (bool, optional) – Record log if True.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – Parameters can be directly passed to the ot.optim.cg solver.

Returns:

• fgw-distance (float) – Fused Gromov-Wasserstein distance for the given parameters.

• log (dict) – Log dictionary, returned only if log==True in parameters.

References

ot.gromov.fused_gromov_wasserstein_dictionary_learning(Cs, Ys, D, nt, alpha, reg=0.0, ps=None, q=None, epochs=20, batch_size=32, learning_rate_C=1.0, learning_rate_Y=1.0, Cdict_init=None, Ydict_init=None, projection='nonnegative_symmetric', use_log=False, tol_outer=1e-05, tol_inner=1e-05, max_iter_outer=20, max_iter_inner=200, use_adam_optimizer=True, verbose=False, **kwargs)[source]

Infer Fused Gromov-Wasserstein linear dictionary $$\{ (\mathbf{C_{dict}[d]}, \mathbf{Y_{dict}[d]}, \mathbf{q}) \}_{d \in [D]}$$ from the list of S attributed structures $$\{ (\mathbf{C_s}, \mathbf{Y_s},\mathbf{p_s}) \}_s$$

$\begin{split}\min_{\mathbf{C_{dict}},\mathbf{Y_{dict}}, \{\mathbf{w_s}\}_{s}} \sum_{s=1}^S FGW_{2,\alpha}(\mathbf{C_s}, \mathbf{Y_s}, \sum_{d=1}^D w_{s,d}\mathbf{C_{dict}[d]},\sum_{d=1}^D w_{s,d}\mathbf{Y_{dict}[d]}, \mathbf{p_s}, \mathbf{q}) \\ - reg\| \mathbf{w_s} \|_2^2\end{split}$

Such that $$\forall s \leq S$$ :

• $$\mathbf{w_s}^\top \mathbf{1}_D = 1$$

• $$\mathbf{w_s} \geq \mathbf{0}_D$$

Where :

• $$\forall s \leq S, \mathbf{C_s}$$ is a (ns,ns) pairwise similarity matrix of variable size ns.

• $$\forall s \leq S, \mathbf{Y_s}$$ is a (ns,d) features matrix of variable size ns and fixed dimension d.

• $$\mathbf{C_{dict}}$$ is a (D, nt, nt) tensor of D pairwise similarity matrices of fixed size nt.

• $$\mathbf{Y_{dict}}$$ is a (D, nt, d) tensor of D feature matrices of fixed size nt and fixed dimension d.

• $$\forall s \leq S, \mathbf{p_s}$$ is the source distribution corresponding to $$\mathbf{C_s}$$

• $$\mathbf{q}$$ is the target distribution assigned to every structure in the embedding space.

• $$\alpha$$ is the trade-off parameter of Fused Gromov-Wasserstein

• reg is the regularization coefficient.

The stochastic algorithm used for estimating the attributed graph dictionary atoms is the one proposed in _

Parameters:
• Cs (list of S symmetric array-like, shape (ns, ns)) – List of Metric/Graph cost matrices of variable size (ns,ns).

• Ys (list of S array-like, shape (ns, d)) – List of feature matrix of variable size (ns,d) with d fixed.

• D (int) – Number of dictionary atoms to learn

• nt (int) – Number of samples within each dictionary atoms

• alpha (float) – Trade-off parameter of Fused Gromov-Wasserstein

• reg (float, optional) – Coefficient of the negative quadratic regularization used to promote sparsity of w. The default is 0.

• ps (list of S array-like, shape (ns,), optional) – Distribution in each source space C of Cs. Default is None and corresponds to uniform distributions.

• q (array-like, shape (nt,), optional) – Distribution in the embedding space whose structure will be learned. Default is None and corresponds to the uniform distribution.

• epochs (int, optional) – Number of epochs used to learn the dictionary. Default is 20.

• batch_size (int, optional) – Batch size for each stochastic gradient update of the dictionary. Set to the dataset size if the provided batch_size is higher than the dataset size. Default is 32.

• learning_rate_C (float, optional) – Learning rate used for the stochastic gradient descent on Cdict. Default is 1.

• learning_rate_Y (float, optional) – Learning rate used for the stochastic gradient descent on Ydict. Default is 1.

• Cdict_init (list of D array-like with shape (nt, nt), optional) – Used to initialize the dictionary structures Cdict. If set to None (default), the dictionary will be initialized randomly. Else Cdict must have shape (D, nt, nt), i.e. match the provided shapes.

• Ydict_init (list of D array-like with shape (nt, d), optional) – Used to initialize the dictionary features Ydict. If set to None, the dictionary features will be initialized randomly. Else Ydict must have shape (D, nt, d), where d is the feature dimension of the inputs Ys.

• projection (str, optional) – If ‘nonnegative’ and/or ‘symmetric’ is in projection, the corresponding projection will be performed at each stochastic update of the dictionary. Otherwise the set of atoms is $$R^{nt \times nt}$$. Default is ‘nonnegative_symmetric’.

• use_log (bool, optional) – If set to True, losses evolution by batches and epochs are tracked. Default is False.

• use_adam_optimizer (bool, optional) – If set to True, adam optimizer with default settings is used as adaptative learning rate strategy. Else perform SGD with fixed learning rate. Default is True.

• tol_outer (float, optional) – Solver precision for the BCD algorithm, measured by absolute relative error on consecutive losses. Default is $$10^{-5}$$.

• tol_inner (float, optional) – Solver precision for the Conjugate Gradient algorithm used to get optimal w at a fixed transport, measured by absolute relative error on consecutive losses. Default is $$10^{-5}$$.

• max_iter_outer (int, optional) – Maximum number of iterations for the BCD. Default is 20.

• max_iter_inner (int, optional) – Maximum number of iterations for the Conjugate Gradient. Default is 200.

• verbose (bool, optional) – Print the reconstruction loss every epoch. Default is False.

Returns:

• Cdict_best_state (D array-like, shape (D,nt,nt)) – Metric/Graph cost matrices composing the dictionary. The dictionary leading to the best loss over an epoch is saved and returned.

• Ydict_best_state (D array-like, shape (D,nt,d)) – Feature matrices composing the dictionary. The dictionary leading to the best loss over an epoch is saved and returned.

• log (dict) – If use_log is True, contains loss evolutions by batches and epochs.

References

ot.gromov.fused_gromov_wasserstein_linear_unmixing(C, Y, Cdict, Ydict, alpha, reg=0.0, p=None, q=None, tol_outer=1e-05, tol_inner=1e-05, max_iter_outer=20, max_iter_inner=200, symmetric=True, **kwargs)[source]

Returns the Fused Gromov-Wasserstein linear unmixing of $$(\mathbf{C},\mathbf{Y},\mathbf{p})$$ onto the attributed dictionary atoms $$\{ (\mathbf{C_{dict}[d]},\mathbf{Y_{dict}[d]}, \mathbf{q}) \}_{d \in [D]}$$

$\min_{\mathbf{w}} FGW_{2,\alpha}(\mathbf{C},\mathbf{Y}, \sum_{d=1}^D w_d\mathbf{C_{dict}[d]},\sum_{d=1}^D w_d\mathbf{Y_{dict}[d]}, \mathbf{p}, \mathbf{q}) - reg \| \mathbf{w} \|_2^2$

such that:

• $$\mathbf{w}^\top \mathbf{1}_D = 1$$

• $$\mathbf{w} \geq \mathbf{0}_D$$

Where :

• $$\mathbf{C}$$ is a (ns,ns) pairwise similarity matrix of variable size ns.

• $$\mathbf{Y}$$ is a (ns,d) features matrix of variable size ns and fixed dimension d.

• $$\mathbf{C_{dict}}$$ is a (D, nt, nt) tensor of D pairwise similarity matrices of fixed size nt.

• $$\mathbf{Y_{dict}}$$ is a (D, nt, d) tensor of D feature matrices of fixed size nt and fixed dimension d.

• $$\mathbf{p}$$ is the source distribution corresponding to $$\mathbf{C}$$

• $$\mathbf{q}$$ is the target distribution assigned to every structure in the embedding space.

• $$\alpha$$ is the trade-off parameter of Fused Gromov-Wasserstein

• reg is the regularization coefficient.

The algorithm used for solving the problem is a Block Coordinate Descent as discussed in _, algorithm 6.

Parameters:
• C (array-like, shape (ns, ns)) – Metric/Graph cost matrix.

• Y (array-like, shape (ns, d)) – Feature matrix.

• Cdict (D array-like, shape (D,nt,nt)) – Metric/Graph cost matrices composing the dictionary on which to embed (C,Y).

• Ydict (D array-like, shape (D,nt,d)) – Feature matrices composing the dictionary on which to embed (C,Y).

• alpha (float,) – Trade-off parameter of Fused Gromov-Wasserstein.

• reg (float, optional) – Coefficient of the negative quadratic regularization used to promote sparsity of w. The default is 0.

• p (array-like, shape (ns,), optional) – Distribution in the source space C. Default is None and corresponds to uniform distribution.

• q (array-like, shape (nt,), optional) – Distribution in the space depicted by the dictionary. Default is None and corresponds to uniform distribution.

• tol_outer (float, optional) – Solver precision for the BCD algorithm. Default is $$10^{-5}$$.

• tol_inner (float, optional) – Solver precision for the Conjugate Gradient algorithm used to get optimal w at a fixed transport. Default is $$10^{-5}$$.

• max_iter_outer (int, optional) – Maximum number of iterations for the BCD. Default is 20.

• max_iter_inner (int, optional) – Maximum number of iterations for the Conjugate Gradient. Default is 200.

Returns:

• w (array-like, shape (D,)) – fused Gromov-Wasserstein linear unmixing of (C,Y,p) onto the span of the dictionary.

• Cembedded (array-like, shape (nt,nt)) – embedded structure of $$(\mathbf{C},\mathbf{Y}, \mathbf{p})$$ onto the dictionary, $$\sum_d w_d\mathbf{C_{dict}[d]}$$.

• Yembedded (array-like, shape (nt,d)) – embedded features of $$(\mathbf{C},\mathbf{Y}, \mathbf{p})$$ onto the dictionary, $$\sum_d w_d\mathbf{Y_{dict}[d]}$$.

• T (array-like (ns,nt)) – Fused Gromov-Wasserstein transport plan between $$(\mathbf{C},\mathbf{p})$$ and $$(\sum_d w_d\mathbf{C_{dict}[d]}, \sum_d w_d\mathbf{Y_{dict}[d]},\mathbf{q})$$.

• current_loss (float) – reconstruction error

References

ot.gromov.gromov_barycenters(N, Cs, ps=None, p=None, lambdas=None, loss_fun='square_loss', symmetric=True, armijo=False, max_iter=1000, tol=1e-09, warmstartT=False, verbose=False, log=False, init_C=None, random_state=None, **kwargs)[source]

Returns the Gromov-Wasserstein barycenters of S measured similarity matrices $$(\mathbf{C}_s)_{1 \leq s \leq S}$$

The function solves the following optimization problem with block coordinate descent:

$\mathbf{C}^* = \mathop{\arg \min}_{\mathbf{C}\in \mathbb{R}^{N \times N}} \quad \sum_s \lambda_s \mathrm{GW}(\mathbf{C}, \mathbf{C}_s, \mathbf{p}, \mathbf{p}_s)$

Where :

• $$\mathbf{C}_s$$: metric cost matrix

• $$\mathbf{p}_s$$: distribution

Parameters:
• N (int) – Size of the targeted barycenter

• Cs (list of S array-like of shape (ns, ns)) – Metric cost matrices

• ps (list of S array-like of shape (ns,), optional) – Sample weights in the S spaces. If left to its default value None, uniform distributions are taken.

• p (array-like, shape (N,), optional) – Weights in the targeted barycenter. If left to its default value None, the uniform distribution is taken.

• lambdas (list of float, optional) – List of the S spaces’ weights. If left to its default value None, uniform weights are taken.

• loss_fun (str, optional) – Loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• symmetric (bool, optional) – Whether the structures are assumed symmetric. Default value is True. If set to True (resp. False), the structures will be assumed symmetric (resp. asymmetric).

• update (callable) – function($$\mathbf{p}$$, lambdas, $$\mathbf{T}$$, $$\mathbf{Cs}$$) that updates $$\mathbf{C}$$ according to a specific Kernel with the S $$\mathbf{T}_s$$ couplings calculated at each iteration

• max_iter (int, optional) – Max number of iterations

• tol (float, optional) – Stop threshold on relative error (>0)

• warmstartT (bool, optional) – Whether to perform warmstart of transport plans in the successive gromov-wasserstein transport problems.

• verbose (bool, optional) – Print information along iterations.

• log (bool, optional) – Record log if True.

• init_C (bool | array-like, shape (N,N)) – Initial value for the $$\mathbf{C}$$ matrix provided by the user. If not set, a random initialization is used.

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

Returns:

• C (array-like, shape (N, N)) – Similarity matrix in the barycenter space (permuted arbitrarily)

• log (dict) – Log dictionary of errors during iterations. Returned only if log=True in parameters.

References

ot.gromov.gromov_wasserstein(C1, C2, p=None, q=None, loss_fun='square_loss', symmetric=None, log=False, armijo=False, G0=None, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the Gromov-Wasserstein transport between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$.

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg \min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. But the algorithm uses the C++ CPU backend which can lead to copy overhead on GPU arrays.

Note

All computations in the conjugate gradient solver are done with numpy to limit memory overhead.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, the uniform distribution is taken.

• loss_fun (str, optional) – loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – record log if True

• armijo (bool, optional) – If True, the step of the line-search is found via an Armijo search. Else a closed form is used. If there are convergence issues, use False.

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• T (array-like, shape (ns, nt)) –

Coupling between the two spaces that minimizes:

$$\sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}$$

• log (dict) – Convergence information and loss.

References

ot.gromov.gromov_wasserstein2(C1, C2, p=None, q=None, loss_fun='square_loss', symmetric=None, log=False, armijo=False, G0=None, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the Gromov-Wasserstein loss $$\mathbf{GW}$$ between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$. To recover the Gromov-Wasserstein distance as defined in , compute $$d_{GW} = \frac{1}{2} \sqrt{\mathbf{GW}}$$.

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\mathbf{GW} = \min_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

Note that when using backends, this loss function is differentiable wrt the matrices (C1, C2) and weights (p, q) for quadratic loss using the gradients from _.

Note

This function is backend-compatible and will work on arrays from all compatible backends. But the algorithm uses the C++ CPU backend which can lead to copy overhead on GPU arrays.

Note

All computations in the conjugate gradient solver are done with numpy to limit memory overhead.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is taken.

• q (array-like, shape (nt,), optional) – Distribution in the target space. If left to its default value None, the uniform distribution is taken.

• loss_fun (str) – loss function used for the solver, either ‘square_loss’ or ‘kl_loss’

• symmetric (bool, optional) – Whether C1 and C2 are assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – record log if True

• armijo (bool, optional) – If True, the step of the line-search is found via an Armijo search. Else a closed form is used. If there are convergence issues, use False.

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• gw_dist (float) – Gromov-Wasserstein distance

• log (dict) – convergence information and Coupling matrix

References

ot.gromov.gromov_wasserstein_dictionary_learning(Cs, D, nt, reg=0.0, ps=None, q=None, epochs=20, batch_size=32, learning_rate=1.0, Cdict_init=None, projection='nonnegative_symmetric', use_log=True, tol_outer=1e-05, tol_inner=1e-05, max_iter_outer=20, max_iter_inner=200, use_adam_optimizer=True, verbose=False, **kwargs)[source]

Infer Gromov-Wasserstein linear dictionary $$\{ (\mathbf{C_{dict}[d]}, q) \}_{d \in [D]}$$ from the list of structures $$\{ (\mathbf{C_s},\mathbf{p_s}) \}_s$$

$\min_{\mathbf{C_{dict}}, \{\mathbf{w_s} \}_{s \leq S}} \sum_{s=1}^S GW_2(\mathbf{C_s}, \sum_{d=1}^D w_{s,d}\mathbf{C_{dict}[d]}, \mathbf{p_s}, \mathbf{q}) - reg\| \mathbf{w_s} \|_2^2$

such that, $$\forall s \leq S$$ :

• $$\mathbf{w_s}^\top \mathbf{1}_D = 1$$

• $$\mathbf{w_s} \geq \mathbf{0}_D$$

Where :

• $$\forall s \leq S, \mathbf{C_s}$$ is a (ns,ns) pairwise similarity matrix of variable size ns.

• $$\mathbf{C_{dict}}$$ is a (D, nt, nt) tensor of D pairwise similarity matrices of fixed size nt.

• $$\forall s \leq S, \mathbf{p_s}$$ is the source distribution corresponding to $$\mathbf{C_s}$$

• $$\mathbf{q}$$ is the target distribution assigned to every structure in the embedding space.

• reg is the regularization coefficient.

The stochastic algorithm used for estimating the graph dictionary atoms is the one proposed in _

Parameters:
• Cs (list of S symmetric array-like, shape (ns, ns)) – List of Metric/Graph cost matrices of variable size (ns, ns).

• D (int) – Number of dictionary atoms to learn

• nt (int) – Number of samples within each dictionary atom

• reg (float, optional) – Coefficient of the negative quadratic regularization used to promote sparsity of w. The default is 0.

• ps (list of S array-like, shape (ns,), optional) – Distribution in each source space C of Cs. Default is None and corresponds to uniform distributions.

• q (array-like, shape (nt,), optional) – Distribution in the embedding space whose structure will be learned. Default is None and corresponds to uniform distributions.

• epochs (int, optional) – Number of epochs used to learn the dictionary. Default is 20.

• batch_size (int, optional) – Batch size for each stochastic gradient update of the dictionary. Set to the dataset size if the provided batch_size is higher than the dataset size. Default is 32.

• learning_rate (float, optional) – Learning rate used for the stochastic gradient descent. Default is 1.

• Cdict_init (list of D array-like with shape (nt, nt), optional) – Used to initialize the dictionary. If set to None (default), the dictionary is initialized randomly. Otherwise Cdict_init must have shape (D, nt, nt), i.e. match the provided atom shapes.

• projection (str, optional) – If ‘nonnegative’ and/or ‘symmetric’ appears in projection, the corresponding projection is performed at each stochastic update of the dictionary. Otherwise the set of atoms is $$R^{nt \times nt}$$. Default is ‘nonnegative_symmetric’.

• use_log (bool, optional) – If set to True, the loss evolution by batches and epochs is tracked. Default is True.

• use_adam_optimizer (bool, optional) – If set to True, the Adam optimizer with default settings is used as an adaptive learning rate strategy. Otherwise SGD with a fixed learning rate is performed. Default is True.

• tol_outer (float, optional) – Solver precision for the BCD algorithm, measured by absolute relative error on consecutive losses. Default is $$10^{-5}$$.

• tol_inner (float, optional) – Solver precision for the Conjugate Gradient algorithm used to get optimal w at a fixed transport, measured by absolute relative error on consecutive losses. Default is $$10^{-5}$$.

• max_iter_outer (int, optional) – Maximum number of iterations for the BCD. Default is 20.

• max_iter_inner (int, optional) – Maximum number of iterations for the Conjugate Gradient. Default is 200.

• verbose (bool, optional) – Print the reconstruction loss every epoch. Default is False.

Returns:

• Cdict_best_state (D array-like, shape (D,nt,nt)) – Metric/Graph cost matrices composing the dictionary. The dictionary leading to the best loss over an epoch is saved and returned.

• log (dict) – If use_log is True, contains loss evolutions by batches and epochs.

References

ot.gromov.gromov_wasserstein_linear_unmixing(C, Cdict, reg=0.0, p=None, q=None, tol_outer=1e-05, tol_inner=1e-05, max_iter_outer=20, max_iter_inner=200, symmetric=None, **kwargs)[source]

Returns the Gromov-Wasserstein linear unmixing of $$(\mathbf{C},\mathbf{p})$$ onto the dictionary $$\{ (\mathbf{C_{dict}[d]}, \mathbf{q}) \}_{d \in [D]}$$.

$\min_{ \mathbf{w}} GW_2(\mathbf{C}, \sum_{d=1}^D w_d\mathbf{C_{dict}[d]}, \mathbf{p}, \mathbf{q}) - reg \| \mathbf{w} \|_2^2$

such that:

• $$\mathbf{w}^\top \mathbf{1}_D = 1$$

• $$\mathbf{w} \geq \mathbf{0}_D$$

Where :

• $$\mathbf{C}$$ is the (ns,ns) pairwise similarity matrix.

• $$\mathbf{C_{dict}}$$ is a (D, nt, nt) tensor of D pairwise similarity matrices of size nt.

• $$\mathbf{p}$$ and $$\mathbf{q}$$ are source and target weights.

• reg is the regularization coefficient.

The algorithm used for solving the problem is a Block Coordinate Descent, as discussed in _, Algorithm 1.

Parameters:
• C (array-like, shape (ns, ns)) – Metric/Graph cost matrix.

• Cdict (D array-like, shape (D,nt,nt)) – Metric/Graph cost matrices composing the dictionary on which to embed C.

• reg (float, optional.) – Coefficient of the negative quadratic regularization used to promote sparsity of w. Default is 0.

• p (array-like, shape (ns,), optional) – Distribution in the source space C. Default is None and corresponds to uniform distribution.

• q (array-like, shape (nt,), optional) – Distribution in the space depicted by the dictionary. Default is None and corresponds to uniform distribution.

• tol_outer (float, optional) – Solver precision for the BCD algorithm, measured by absolute relative error on consecutive losses. Default is $$10^{-5}$$.

• tol_inner (float, optional) – Solver precision for the Conjugate Gradient algorithm used to get optimal w at a fixed transport. Default is $$10^{-5}$$.

• max_iter_outer (int, optional) – Maximum number of iterations for the BCD. Default is 20.

• max_iter_inner (int, optional) – Maximum number of iterations for the Conjugate Gradient. Default is 200.

Returns:

• w (array-like, shape (D,)) – Gromov-Wasserstein linear unmixing of $$(\mathbf{C},\mathbf{p})$$ onto the span of the dictionary.

• Cembedded (array-like, shape (nt,nt)) – embedded structure of $$(\mathbf{C},\mathbf{p})$$ onto the dictionary, $$\sum_d w_d\mathbf{C_{dict}[d]}$$.

• T (array-like (ns, nt)) – Gromov-Wasserstein transport plan between $$(\mathbf{C},\mathbf{p})$$ and $$(\sum_d w_d\mathbf{C_{dict}[d]}, \mathbf{q})$$

• current_loss (float) – reconstruction error

References

ot.gromov.gwggrad(constC, hC1, hC2, T, nx=None)[source]

Return the gradient for Gromov-Wasserstein

The gradient is computed as described in Proposition 2 in 

Parameters:
• constC (array-like, shape (ns, nt)) – Constant $$\mathbf{C}$$ matrix in Eq. (6)

• hC1 (array-like, shape (ns, ns)) – $$\mathbf{h1}(\mathbf{C1})$$ matrix in Eq. (6)

• hC2 (array-like, shape (nt, nt)) – $$\mathbf{h2}(\mathbf{C2})$$ matrix in Eq. (6)

• T (array-like, shape (ns, nt)) – Current value of transport matrix $$\mathbf{T}$$

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

grad – Gromov-Wasserstein gradient

Return type:

array-like, shape (ns, nt)

References

ot.gromov.gwloss(constC, hC1, hC2, T, nx=None)[source]

Return the Loss for Gromov-Wasserstein

The loss is computed as described in Proposition 1 Eq. (6) in 

Parameters:
• constC (array-like, shape (ns, nt)) – Constant $$\mathbf{C}$$ matrix in Eq. (6)

• hC1 (array-like, shape (ns, ns)) – $$\mathbf{h1}(\mathbf{C1})$$ matrix in Eq. (6)

• hC2 (array-like, shape (nt, nt)) – $$\mathbf{h2}(\mathbf{C2})$$ matrix in Eq. (6)

• T (array-like, shape (ns, nt)) – Current value of transport matrix $$\mathbf{T}$$

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

loss – Gromov-Wasserstein loss

Return type:

float

References

ot.gromov.init_matrix(C1, C2, p, q, loss_fun='square_loss', nx=None)[source]

Return loss matrices and tensors for Gromov-Wasserstein fast computation

Returns the value of $$\mathcal{L}(\mathbf{C_1}, \mathbf{C_2}) \otimes \mathbf{T}$$ with the selected loss function as the loss function of Gromov-Wasserstein discrepancy.

The matrices are computed as described in Proposition 1 in 

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{T}$$: A coupling between those two spaces

The square-loss function $$L(a, b) = |a - b|^2$$ is read as :

\begin{align}\begin{aligned}L(a, b) = f_1(a) + f_2(b) - h_1(a) h_2(b)\\\mathrm{with} \ f_1(a) &= a^2\\ f_2(b) &= b^2\\ h_1(a) &= a\\ h_2(b) &= 2b\end{aligned}\end{align}

The kl-loss function $$L(a, b) = a \log\left(\frac{a}{b}\right) - a + b$$ is read as :

\begin{align}\begin{aligned}L(a, b) = f_1(a) + f_2(b) - h_1(a) h_2(b)\\\mathrm{with} \ f_1(a) &= a \log(a) - a\\ f_2(b) &= b\\ h_1(a) &= a\\ h_2(b) &= \log(b)\end{aligned}\end{align}
Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,)) – Probability distribution in the source space

• q (array-like, shape (nt,)) – Probability distribution in the target space

• loss_fun (str, optional) – Name of loss function to use: either ‘square_loss’ or ‘kl_loss’ (default=’square_loss’)

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

• constC (array-like, shape (ns, nt)) – Constant $$\mathbf{C}$$ matrix in Eq. (6)

• hC1 (array-like, shape (ns, ns)) – $$\mathbf{h1}(\mathbf{C1})$$ matrix in Eq. (6)

• hC2 (array-like, shape (nt, nt)) – $$\mathbf{h2}(\mathbf{C2})$$ matrix in Eq. (6)

References

ot.gromov.init_matrix_semirelaxed(C1, C2, p, loss_fun='square_loss', nx=None)[source]

Return loss matrices and tensors for semi-relaxed Gromov-Wasserstein fast computation

Returns the value of $$\mathcal{L}(\mathbf{C_1}, \mathbf{C_2}) \otimes \mathbf{T}$$ with the selected loss function as the loss function of semi-relaxed Gromov-Wasserstein discrepancy.

The matrices are computed as described in Proposition 1 in  and adapted to the semi-relaxed problem where the second marginal is not a constant anymore.

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{T}$$: A coupling between those two spaces

The square-loss function $$L(a, b) = |a - b|^2$$ is read as :

\begin{align}\begin{aligned}L(a, b) = f_1(a) + f_2(b) - h_1(a) h_2(b)\\\mathrm{with} \ f_1(a) &= a^2\\ f_2(b) &= b^2\\ h_1(a) &= a\\ h_2(b) &= 2b\end{aligned}\end{align}
Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,)) – Probability distribution in the source space

• loss_fun (str, optional) – Name of the loss function to use (default=’square_loss’)

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

• constC (array-like, shape (ns, nt)) – Constant $$\mathbf{C}$$ matrix in Eq. (6) adapted to srGW

• hC1 (array-like, shape (ns, ns)) – $$\mathbf{h1}(\mathbf{C1})$$ matrix in Eq. (6)

• hC2 (array-like, shape (nt, nt)) – $$\mathbf{h2}(\mathbf{C2})$$ matrix in Eq. (6)

• fC2t (array-like, shape (nt, nt)) – $$\mathbf{f2}(\mathbf{C2})^\top$$ matrix in Eq. (6)

References

ot.gromov.pointwise_gromov_wasserstein(C1, C2, p, q, loss_fun, alpha=1, max_iter=100, threshold_plan=0, log=False, verbose=False, random_state=None)[source]

Returns the gromov-wasserstein transport between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$ using a stochastic Frank-Wolfe. This method has a $$\mathcal{O}(\mathrm{max\_iter} \times PN^2)$$ time complexity with P the number of Sinkhorn iterations.

The function solves the following optimization problem:

\begin{align}\begin{aligned}\mathbf{GW} = \mathop{\arg \min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,)) – Distribution in the source space

• q (array-like, shape (nt,)) – Distribution in the target space

• loss_fun (function: $$\mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}$$) – Loss function used for the distance, the transport plan does not depend on the loss function

• alpha (float) – Step of the Frank-Wolfe algorithm, should be between 0 and 1

• max_iter (int, optional) – Max number of iterations

• threshold_plan (float, optional) – Threshold below which values of the transport plan are set to zero. If above zero, the marginal constraints are (slightly) violated.

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – If True, also return the estimated distance and its standard deviation

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

Returns:

T – Optimal coupling between the two spaces

Return type:

array-like, shape (ns, nt)

References

ot.gromov.sampled_gromov_wasserstein(C1, C2, p, q, loss_fun, nb_samples_grad=100, epsilon=1, max_iter=500, log=False, verbose=False, random_state=None)[source]

Returns the gromov-wasserstein transport between $$(\mathbf{C_1}, \mathbf{p})$$ and $$(\mathbf{C_2}, \mathbf{q})$$ using a 1-stochastic Frank-Wolfe. This method has a $$\mathcal{O}(\mathrm{max\_iter} \times N \log(N))$$ time complexity by relying on the 1D Optimal Transport solver.

The function solves the following optimization problem:

\begin{align}\begin{aligned}\mathbf{GW} = \mathop{\arg \min}_\mathbf{T} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T}^T \mathbf{1} &= \mathbf{q}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• $$\mathbf{q}$$: distribution in the target space

• L: loss function to account for the misfit between the similarity matrices

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,)) – Distribution in the source space

• q (array-like, shape (nt,)) – Distribution in the target space

• loss_fun (function: $$\mathbb{R} \times \mathbb{R} \mapsto \mathbb{R}$$) – Loss function used for the distance, the transport plan does not depend on the loss function

• epsilon (float) – Weight of the Kullback-Leibler regularization

• max_iter (int, optional) – Max number of iterations

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – If True, also return the estimated distance and its standard deviation

• random_state (int or RandomState instance, optional) – Fix the seed for reproducibility

Returns:

T – Optimal coupling between the two spaces

Return type:

array-like, shape (ns, nt)

References

ot.gromov.semirelaxed_fused_gromov_wasserstein(M, C1, C2, p=None, loss_fun='square_loss', symmetric=None, alpha=0.5, G0=None, log=False, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Computes the semi-relaxed Fused Gromov-Wasserstein transport between two graphs (see ).

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg \min}_{\mathbf{T}} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) T_{i,j} T_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

where :

• $$\mathbf{M}$$ is the (ns, nt) metric cost matrix

• $$\mathbf{p}$$ source weights (sum to 1)

• L is a loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all the steps in the conditional gradient are differentiable.

The algorithm used for solving the problem is conditional gradient as discussed in 

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is used.

• loss_fun (str) – Loss function used by the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• symmetric (bool, optional) – Whether C1 and C2 are to be assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• log (bool, optional) – record log if True

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• gamma (array-like, shape (ns, nt)) – Optimal transportation matrix for the given parameters.

• log (dict) – Log dictionary return only if log==True in parameters.

References

ot.gromov.semirelaxed_fused_gromov_wasserstein2(M, C1, C2, p=None, loss_fun='square_loss', symmetric=None, alpha=0.5, G0=None, log=False, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Computes the semi-relaxed FGW divergence between two graphs (see ).

\begin{align}\begin{aligned}\mathbf{srFGW}_{\alpha} = \min_{\mathbf{T}} \quad (1 - \alpha) \langle \mathbf{T}, \mathbf{M} \rangle_F + \alpha \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) T_{i,j} T_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

where :

• $$\mathbf{M}$$ is the (ns, nt) metric cost matrix

• $$\mathbf{p}$$ source weights (sum to 1)

• L is a loss function to account for the misfit between the similarity matrices

The algorithm used for solving the problem is conditional gradient as discussed in 

Note that when using backends, this loss function is differentiable wrt the matrices (C1, C2) but not yet for the weights p.

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all the steps in the conditional gradient are differentiable.

Parameters:
• M (array-like, shape (ns, nt)) – Metric cost matrix between features across domains

• C1 (array-like, shape (ns, ns)) – Metric cost matrix representative of the structure in the source space.

• C2 (array-like, shape (nt, nt)) – Metric cost matrix representative of the structure in the target space.

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is used.

• loss_fun (str, optional) – Loss function used by the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• symmetric (bool, optional) – Whether C1 and C2 are to be assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• alpha (float, optional) – Trade-off parameter (0 < alpha < 1)

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• log (bool, optional) – Record log if True.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – Parameters can be directly passed to the ot.optim.cg solver.

Returns:

• srfgw-divergence (float) – Semi-relaxed Fused Gromov-Wasserstein divergence for the given parameters.

• log (dict) – Log dictionary return only if log==True in parameters.

References

ot.gromov.semirelaxed_gromov_wasserstein(C1, C2, p=None, loss_fun='square_loss', symmetric=None, log=False, G0=None, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the semi-relaxed Gromov-Wasserstein transport plan from $$(\mathbf{C_1}, \mathbf{p})$$ to $$\mathbf{C_2}$$ (see ).

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\mathbf{T}^* \in \mathop{\arg \min}_{\mathbf{T}} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• L: loss function to account for the misfit between the similarity matrices

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all the steps in the conditional gradient are differentiable.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is used.

• loss_fun (str) – Loss function used by the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• symmetric (bool, optional) – Whether C1 and C2 are to be assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – record log if True

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• T (array-like, shape (ns, nt)) –

Coupling between the two spaces that minimizes:

$$\sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}$$

• log (dict) – Convergence information and loss.

References

ot.gromov.semirelaxed_gromov_wasserstein2(C1, C2, p=None, loss_fun='square_loss', symmetric=None, log=False, G0=None, max_iter=10000.0, tol_rel=1e-09, tol_abs=1e-09, **kwargs)[source]

Returns the semi-relaxed Gromov-Wasserstein divergence from $$(\mathbf{C_1}, \mathbf{p})$$ to $$\mathbf{C_2}$$ (see ).

The function solves the following optimization problem using Conditional Gradient:

\begin{align}\begin{aligned}\text{srGW} = \min_{\mathbf{T}} \quad \sum_{i,j,k,l} L(\mathbf{C_1}_{i,k}, \mathbf{C_2}_{j,l}) \mathbf{T}_{i,j} \mathbf{T}_{k,l}\\s.t. \ \mathbf{T} \mathbf{1} &= \mathbf{p}\\ \mathbf{T} &\geq 0\end{aligned}\end{align}

Where :

• $$\mathbf{C_1}$$: Metric cost matrix in the source space

• $$\mathbf{C_2}$$: Metric cost matrix in the target space

• $$\mathbf{p}$$: distribution in the source space

• L: loss function to account for the misfit between the similarity matrices

Note that when using backends, this loss function is differentiable wrt the matrices (C1, C2) but not yet for the weights p.

Note

This function is backend-compatible and will work on arrays from all compatible backends. However, not all the steps in the conditional gradient are differentiable.

Parameters:
• C1 (array-like, shape (ns, ns)) – Metric cost matrix in the source space

• C2 (array-like, shape (nt, nt)) – Metric cost matrix in the target space

• p (array-like, shape (ns,), optional) – Distribution in the source space. If left to its default value None, the uniform distribution is used.

• loss_fun (str) – Loss function used by the solver, either ‘square_loss’ or ‘kl_loss’. ‘kl_loss’ is not implemented yet and will raise an error.

• symmetric (bool, optional) – Whether C1 and C2 are to be assumed symmetric. If left to its default None value, a symmetry test will be conducted. If set to True (resp. False), C1 and C2 will be assumed symmetric (resp. asymmetric).

• verbose (bool, optional) – Print information along iterations

• log (bool, optional) – record log if True

• G0 (array-like, shape (ns,nt), optional) – If None the initial transport plan of the solver is pq^T. Otherwise G0 must satisfy marginal constraints and will be used as initial transport of the solver.

• max_iter (int, optional) – Max number of iterations

• tol_rel (float, optional) – Stop threshold on relative error (>0)

• tol_abs (float, optional) – Stop threshold on absolute error (>0)

• **kwargs (dict) – parameters can be directly passed to the ot.optim.cg solver

Returns:

• srgw (float) – Semi-relaxed Gromov-Wasserstein divergence

• log (dict) – convergence information and Coupling matrix

References

ot.gromov.solve_gromov_linesearch(G, deltaG, cost_G, C1, C2, M, reg, alpha_min=None, alpha_max=None, nx=None, **kwargs)[source]

Solve the linesearch in the FW iterations

Parameters:
• G (array-like, shape(ns,nt)) – The transport map at a given iteration of the FW

• deltaG (array-like (ns,nt)) – Difference between the optimal map found by linearization in the FW algorithm and the value at a given iteration

• cost_G (float) – Value of the cost at G

• C1 (array-like (ns,ns), optional) – Structure matrix in the source domain.

• C2 (array-like (nt,nt), optional) – Structure matrix in the target domain.

• M (array-like (ns,nt)) – Cost matrix between the features.

• reg (float) – Regularization parameter.

• alpha_min (float, optional) – Minimum value for alpha

• alpha_max (float, optional) – Maximum value for alpha

• nx (backend, optional) – If let to its default value None, a backend test will be conducted.

Returns:

• alpha (float) – The optimal step size of the FW

• fc (int) – Number of function calls. Unused here.

• cost_G (float) – The value of the cost for the next iteration

References

ot.gromov.solve_semirelaxed_gromov_linesearch(G, deltaG, cost_G, C1, C2, ones_p, M, reg, alpha_min=None, alpha_max=None, nx=None, **kwargs)[source]

Solve the linesearch in the Conditional Gradient iterations for the semi-relaxed Gromov-Wasserstein divergence.

Parameters:
• G (array-like, shape(ns,nt)) – The transport map at a given iteration of the FW

• deltaG (array-like (ns,nt)) – Difference between the optimal map found by linearization in the FW algorithm and the value at a given iteration

• cost_G (float) – Value of the cost at G

• C1 (array-like (ns,ns)) – Structure matrix in the source domain.

• C2 (array-like (nt,nt)) – Structure matrix in the target domain.

• ones_p (array-like (ns,1)) – Array of ones of size ns

• M (array-like (ns,nt)) – Cost matrix between the features.

• reg (float) – Regularization parameter.

• alpha_min (float, optional) – Minimum value for alpha

• alpha_max (float, optional) – Maximum value for alpha

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

• alpha (float) – The optimal step size of the FW

• fc (int) – Number of function calls. Unused here.

• cost_G (float) – The value of the cost for the next iteration

References

ot.gromov.tensor_product(constC, hC1, hC2, T, nx=None)[source]

Return the tensor for Gromov-Wasserstein fast computation

The tensor is computed as described in Proposition 1 Eq. (6) in 

Parameters:
• constC (array-like, shape (ns, nt)) – Constant $$\mathbf{C}$$ matrix in Eq. (6)

• hC1 (array-like, shape (ns, ns)) – $$\mathbf{h1}(\mathbf{C1})$$ matrix in Eq. (6)

• hC2 (array-like, shape (nt, nt)) – $$\mathbf{h2}(\mathbf{C2})$$ matrix in Eq. (6)

• T (array-like, shape (ns, nt)) – Current value of transport matrix $$\mathbf{T}$$

• nx (backend, optional) – If left to its default value None, a backend test will be conducted.

Returns:

tens – $$\mathcal{L}(\mathbf{C_1}, \mathbf{C_2}) \otimes \mathbf{T}$$ tensor-matrix multiplication result

Return type:

array-like, shape (ns, nt)

References

ot.gromov.update_feature_matrix(lambdas, Ys, Ts, p)[source]

Updates the feature matrix with respect to the S $$\mathbf{T}_s$$ couplings.

See “Solving the barycenter problem with Block Coordinate Descent (BCD)” in _

Parameters:
• p (array-like, shape (N,)) – masses in the targeted barycenter

• lambdas (list of float) – List of the S spaces’ weights

• Ts (list of S array-like, shape (ns,N)) – The S $$\mathbf{T}_s$$ couplings calculated at each iteration

• Ys (list of S array-like, shape (d,ns)) – The features.

Returns:

X – Updated feature matrix

Return type:

array-like, shape (d, N)

References

ot.gromov.update_kl_loss(p, lambdas, T, Cs)[source]

Updates $$\mathbf{C}$$ according to the KL Loss kernel with the S $$\mathbf{T}_s$$ couplings calculated at each iteration

Parameters:
• p (array-like, shape (N,)) – Weights in the targeted barycenter.

• lambdas (list of float) – List of the S spaces’ weights

• T (list of S array-like of shape (ns,N)) – The S $$\mathbf{T}_s$$ couplings calculated at each iteration.

• Cs (list of S array-like, shape(ns,ns)) – Metric cost matrices.

Returns:

C – updated $$\mathbf{C}$$ matrix

Return type:

array-like, shape (N, N)

ot.gromov.update_square_loss(p, lambdas, T, Cs)[source]

Updates $$\mathbf{C}$$ according to the L2 Loss kernel with the S $$\mathbf{T}_s$$ couplings calculated at each iteration

Parameters:
• p (array-like, shape (N,)) – Masses in the targeted barycenter.

• lambdas (list of float) – List of the S spaces’ weights.

• T (list of S array-like of shape (ns,N)) – The S $$\mathbf{T}_s$$ couplings calculated at each iteration.

• Cs (list of S array-like, shape(ns,ns)) – Metric cost matrices.

Returns:

C – Updated $$\mathbf{C}$$ matrix.

Return type:

array-like, shape (N, N)