ot.dr

Dimension reduction with OT

Warning

Note that by default this module is not imported in ot. In order to use it you need to explicitly import ot.dr

Functions

ot.dr.dist(x1, x2)[source]

Compute the squared Euclidean distance between samples (compatible with autograd)
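
This helper is mainly used internally by the solvers in this module. Below is a hedged sketch of the expected behaviour, assuming it returns the pairwise matrix of squared Euclidean distances between the rows of its two arguments (the toy arrays are illustrative):

import numpy as np
import ot.dr  # not imported by default

rng = np.random.RandomState(0)
X1 = rng.randn(5, 3)   # 5 samples in dimension 3
X2 = rng.randn(4, 3)   # 4 samples in dimension 3

M = ot.dr.dist(X1, X2)  # assumed: (5, 4) matrix of pairwise squared distances

# plain NumPy reference for comparison
M_ref = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)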

ot.dr.fda(X, y, p=2, reg=1e-16)[source]

Fisher Discriminant Analysis

Parameters
  • X (ndarray, shape (n, d)) – Training samples.

  • y (ndarray, shape (n,)) – Labels for training samples.

  • p (int, optional) – Size of the dimensionality reduction.

  • reg (float, optional) – Regularization term >0 (ridge regularization)

Returns

  • P (ndarray, shape (d, p)) – Optimal projection matrix for the given parameters.

  • proj (callable) – Projection function including mean centering.

Examples using ot.dr.fda
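
A minimal usage sketch (the three-class Gaussian toy data below is illustrative, not taken from the documentation):

import numpy as np
import ot.dr  # not imported by default

# three Gaussian classes in 10 dimensions, each shifted along a different axis
rng = np.random.RandomState(42)
n = 100
means = np.zeros((3, 10))
means[0, 0], means[1, 1], means[2, 2] = 4.0, 4.0, 4.0
X = np.vstack([rng.randn(n, 10) + m for m in means])
y = np.repeat(np.arange(3), n)

P, proj = ot.dr.fda(X, y, p=2)   # P has shape (10, 2)
X_proj = proj(X)                 # samples centered and projected to 2 dimensions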

ot.dr.logsumexp(M, axis)[source]

Log-sum-exp reduction compatible with autograd (no numpy implementation)
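
A hedged sketch of the expected behaviour, assuming the reduction matches the usual log-sum-exp over the given axis (the output shape, e.g. whether the reduced axis is kept, may differ from the naive reference):

import numpy as np
import ot.dr  # not imported by default

M = np.random.rand(5, 4)

out = ot.dr.logsumexp(M, axis=1)      # numerically stable log-sum-exp over axis 1
ref = np.log(np.exp(M).sum(axis=1))   # naive NumPy reference, same values up to shape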

ot.dr.projection_robust_wasserstein(X, Y, a, b, tau, U0=None, reg=0.1, k=2, stopThr=0.001, maxiter=100, verbose=0)[source]

Projection Robust Wasserstein Distance [32]

The function solves the following optimization problem:

\[\max_{U \in St(d, k)} \ \min_{\pi \in \Pi(\mu,\nu)} \quad \sum_{i,j} \pi_{i,j} \|U^T(\mathbf{x}_i - \mathbf{y}_j)\|^2 - \mathrm{reg} \cdot H(\pi)\]
where :

  • \(U\) is a linear projection operator in the Stiefel(d, k) manifold

  • \(H(\pi)\) is the entropic regularizer

  • \(\mathbf{x}_i\), \(\mathbf{y}_j\) are samples of measures \(\mu\) and \(\nu\) respectively

Parameters
  • X (ndarray, shape (n, d)) – Samples from measure \(\mu\)

  • Y (ndarray, shape (n, d)) – Samples from measure \(\nu\)

  • a (ndarray, shape (n,)) – Weights for measure \(\mu\)

  • b (ndarray, shape (n,)) – Weights for measure \(\nu\)

  • tau (float) – Stepsize for the Riemannian gradient descent

  • U0 (ndarray, shape (d, k), optional) – Initial starting point for the projection.

  • reg (float, optional) – Regularization term >0 (entropic regularization)

  • k (int) – Subspace dimension

  • stopThr (float, optional) – Stop threshold on error (>0)

  • maxiter (int, optional) – Maximum number of iterations.

  • verbose (int, optional) – Print information along iterations.

Returns

  • pi (ndarray, shape (n, n)) – Optimal transportation matrix for the given parameters

  • U (ndarray, shape (d, k)) – Projection operator.

References

32

Huang, M., Ma, S., & Lai, L. (2021). A Riemannian Block Coordinate Descent Method for Computing the Projection Robust Wasserstein Distance. ICML.
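
A minimal usage sketch with illustrative toy data (two empirical measures with uniform weights; the stepsize tau and regularization reg below are arbitrary illustrative values):

import numpy as np
import ot
import ot.dr  # not imported by default

rng = np.random.RandomState(0)
n, d, k = 50, 10, 2
X = rng.randn(n, d)
Y = rng.randn(n, d) + 1.0   # shifted samples for the second measure
a = ot.unif(n)              # uniform weights on X
b = ot.unif(n)              # uniform weights on Y

pi, U = ot.dr.projection_robust_wasserstein(X, Y, a, b, tau=0.002, reg=0.2, k=k)
# pi is the (n, n) transport plan, U the (d, k) projection operator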

ot.dr.sinkhorn(w1, w2, M, reg, k)[source]

Sinkhorn algorithm with a fixed number of iterations (autograd)

ot.dr.sinkhorn_log(w1, w2, M, reg, k)[source]

Sinkhorn algorithm in the log domain with a fixed number of iterations (autograd)
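
These are autograd-friendly helpers used internally by ot.dr.wda(). A hedged sketch, assuming both take marginal weights w1 and w2, a cost matrix M, a regularization reg and a fixed number of iterations k, and return the regularized transport plan:

import numpy as np
import ot
import ot.dr  # not imported by default

rng = np.random.RandomState(0)
n = 20
w1 = ot.unif(n)
w2 = ot.unif(n)
M = ot.dist(rng.randn(n, 2), rng.randn(n, 2))   # squared Euclidean cost matrix

G = ot.dr.sinkhorn(w1, w2, M, reg=0.1, k=50)           # 50 fixed Sinkhorn iterations
G_log = ot.dr.sinkhorn_log(w1, w2, M, reg=0.01, k=50)  # log domain, safer for small reg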

ot.dr.split_classes(X, y)[source]

Split samples in \(\mathbf{X}\) by classes in \(\mathbf{y}\)
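
A hedged sketch, assuming the function returns one array of samples per class label:

import numpy as np
import ot.dr  # not imported by default

X = np.arange(12).reshape(6, 2)
y = np.array([0, 1, 0, 2, 1, 0])

parts = ot.dr.split_classes(X, y)
# assumed: one (n_c, d) array per class, e.g. the first entry holds the rows of X with label 0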

ot.dr.wda(X, y, p=2, reg=1, k=10, solver=None, sinkhorn_method='sinkhorn', maxiter=100, verbose=0, P0=None, normalize=False)[source]

Wasserstein Discriminant Analysis [11]

The function solves the following optimization problem:

\[\mathbf{P} = \mathop{\arg \min}_\mathbf{P} \quad \frac{\sum\limits_i W(P \mathbf{X}^i, P \mathbf{X}^i)}{\sum\limits_{i, j \neq i} W(P \mathbf{X}^i, P \mathbf{X}^j)}\]

where :

  • \(P\) is a linear projection operator in the Stiefel(p, d) manifold

  • \(W\) is the entropy-regularized Wasserstein distance

  • \(\mathbf{X}^i\) are the samples in the dataset corresponding to class \(i\)

Choosing a Sinkhorn solver

By default, and when using a regularization parameter that is not too small, the default Sinkhorn solver should be enough. If you need to use a small regularization to get sparser transport plans, you should use the ot.dr.sinkhorn_log() solver, which avoids numerical errors but can be slow in practice.

Parameters
  • X (ndarray, shape (n, d)) – Training samples.

  • y (ndarray, shape (n,)) – Labels for training samples.

  • p (int, optional) – Size of the dimensionality reduction.

  • reg (float, optional) – Regularization term >0 (entropic regularization)

  • solver (None | str, optional) – None for steepest descent, 'TrustRegions' for the trust-region algorithm; otherwise it should be a pymanopt solver.

  • sinkhorn_method (str) – method used for the Sinkhorn solver, either ‘sinkhorn’ or ‘sinkhorn_log’

  • P0 (ndarray, shape (d, p)) – Initial starting point for projection.

  • normalize (bool, optional) – Normalize the Wasserstein distance by the average distance on P0 (default: False)

  • verbose (int, optional) – Print information along iterations.

Returns

  • P (ndarray, shape (d, p)) – Optimal projection matrix for the given parameters.

  • proj (callable) – Projection function including mean centering.

References

11

Flamary, R., Cuturi, M., Courty, N., & Rakotomamonjy, A. (2016). Wasserstein Discriminant Analysis. arXiv preprint arXiv:1608.08063.

Examples using ot.dr.wda
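
A minimal usage sketch with illustrative toy data; the module relies on the optional autograd and pymanopt dependencies, and the sinkhorn_method='sinkhorn_log' choice follows the note above about small regularization:

import numpy as np
import ot.dr  # not imported by default

# three Gaussian classes in 10 dimensions, each shifted along a different axis
rng = np.random.RandomState(0)
n = 100
means = np.zeros((3, 10))
means[0, 0], means[1, 1], means[2, 2] = 3.0, 3.0, 3.0
X = np.vstack([rng.randn(n, 10) + m for m in means])
y = np.repeat(np.arange(3), n)

# project onto p=2 dimensions; the log-domain solver is safer with a small reg
P, proj = ot.dr.wda(X, y, p=2, reg=0.05, k=10,
                    sinkhorn_method='sinkhorn_log', maxiter=50)
X_proj = proj(X)   # samples centered and projected with the learned P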