ot.sliced
Sliced OT Distances
Functions
- ot.sliced.get_random_projections(d, n_projections, seed=None, backend=None, type_as=None)[source]
Generates n_projections samples from the uniform distribution on the unit sphere of dimension \(d-1\): \(\mathcal{U}(\mathcal{S}^{d-1})\)
- Parameters:
d (int) – dimension of the space
n_projections (int) – number of samples requested
seed (int or RandomState, optional) – Seed used for the random number generator
backend (optional) – Backend to use for the random generation
type_as (optional) – Backend array used to define the type of the output
- Returns:
out – The uniform unit vectors on the sphere
- Return type:
ndarray, shape (d, n_projections)
Examples
>>> n_projections = 100
>>> d = 5
>>> projs = get_random_projections(d, n_projections)
>>> np.allclose(np.sum(np.square(projs), 0), 1.)
True
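The construction behind such a sampler can be sketched in plain NumPy: normalized Gaussian vectors are uniformly distributed on the unit sphere. This is a minimal stand-in (`random_projections` is a hypothetical helper), not the POT implementation itself:

```python
import numpy as np

def random_projections(d, n_projections, seed=None):
    # Gaussian samples, normalized column-wise, are uniform on S^{d-1}
    rng = np.random.RandomState(seed)
    projections = rng.normal(0., 1., (d, n_projections))
    return projections / np.sqrt((projections ** 2).sum(axis=0, keepdims=True))

projs = random_projections(5, 100, seed=0)
print(projs.shape)                                # (5, 100)
print(np.allclose((projs ** 2).sum(axis=0), 1.))  # True
```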
- ot.sliced.max_sliced_wasserstein_distance(X_s, X_t, a=None, b=None, n_projections=50, p=2, projections=None, seed=None, log=False)[source]
Computes a Monte-Carlo approximation of the max p-Sliced Wasserstein distance
\[\mathcal{Max-SWD}_p(\mu, \nu) = \underset{\theta \in \mathcal{U}(\mathbb{S}^{d-1})}{\max} [\mathcal{W}_p^p(\theta_\# \mu, \theta_\# \nu)]^{\frac{1}{p}}\]
where:
\(\theta_\# \mu\) stands for the pushforwards of the projection \(\mathbb{R}^d \ni X \mapsto \langle \theta, X \rangle\)
- Parameters:
X_s (ndarray, shape (n_samples_a, dim)) – samples in the source domain
X_t (ndarray, shape (n_samples_b, dim)) – samples in the target domain
a (ndarray, shape (n_samples_a,), optional) – sample weights in the source domain
b (ndarray, shape (n_samples_b,), optional) – sample weights in the target domain
n_projections (int, optional) – Number of projections used for the Monte-Carlo approximation
p (float, optional (default=2)) – Power p used for computing the sliced Wasserstein
projections (shape (dim, n_projections), optional) – Projection matrix (n_projections and seed are not used in this case)
seed (int or RandomState or None, optional) – Seed used for the random number generator
log (bool, optional) – if True, max_sliced_wasserstein_distance returns the projections used and their associated EMD.
- Returns:
cost (float) – Max-Sliced Wasserstein Cost
log (dict, optional) – log dictionary returned only if log==True in parameters
Examples
>>> n_samples_a = 20
>>> X = np.random.normal(0., 1., (n_samples_a, 5))
>>> max_sliced_wasserstein_distance(X, X, seed=0)
0.0
References
[35] Deshpande, I., Hu, Y. T., Sun, R., Pyrros, A., Siddiqui, N., Koyejo, S., … & Schwing, A. G. (2019). Max-sliced Wasserstein distance and its use for GANs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (pp. 10648-10656).
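The Monte-Carlo estimator can be sketched in plain NumPy. This is a minimal sketch, assuming p=2, uniform weights, and equally sized samples (so each 1-D Wasserstein distance reduces to comparing sorted projections); it is independent of the POT implementation and `max_sliced_w2` is a hypothetical name:

```python
import numpy as np

def max_sliced_w2(X_s, X_t, n_projections=50, seed=None):
    # Draw random directions uniformly on S^{d-1}
    rng = np.random.RandomState(seed)
    thetas = rng.normal(size=(X_s.shape[1], n_projections))
    thetas /= np.sqrt((thetas ** 2).sum(axis=0, keepdims=True))
    # With uniform weights and equal sizes, 1-D W_2^2 compares sorted projections
    proj_s = np.sort(X_s @ thetas, axis=0)
    proj_t = np.sort(X_t @ thetas, axis=0)
    w2p = ((proj_s - proj_t) ** 2).mean(axis=0)  # W_2^2 per direction
    # Max over the sampled directions, then the 1/p root
    return float(np.max(w2p) ** 0.5)

rng = np.random.RandomState(0)
X = rng.normal(0., 1., (20, 5))
Y = X + 3.                              # target shifted in every coordinate
print(max_sliced_w2(X, X, seed=0))      # 0.0
print(max_sliced_w2(X, Y, seed=0) > 0)  # True
```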
- ot.sliced.sliced_wasserstein_distance(X_s, X_t, a=None, b=None, n_projections=50, p=2, projections=None, seed=None, log=False)[source]
Computes a Monte-Carlo approximation of the p-Sliced Wasserstein distance
\[\mathcal{SWD}_p(\mu, \nu) = \underset{\theta \sim \mathcal{U}(\mathbb{S}^{d-1})}{\mathbb{E}}\left(\mathcal{W}_p^p(\theta_\# \mu, \theta_\# \nu)\right)^{\frac{1}{p}}\]
where:
\(\theta_\# \mu\) stands for the pushforwards of the projection \(X \in \mathbb{R}^d \mapsto \langle \theta, X \rangle\)
- Parameters:
X_s (ndarray, shape (n_samples_a, dim)) – samples in the source domain
X_t (ndarray, shape (n_samples_b, dim)) – samples in the target domain
a (ndarray, shape (n_samples_a,), optional) – sample weights in the source domain
b (ndarray, shape (n_samples_b,), optional) – sample weights in the target domain
n_projections (int, optional) – Number of projections used for the Monte-Carlo approximation
p (float, optional (default=2)) – Power p used for computing the sliced Wasserstein
projections (shape (dim, n_projections), optional) – Projection matrix (n_projections and seed are not used in this case)
seed (int or RandomState or None, optional) – Seed used for random number generator
log (bool, optional) – if True, sliced_wasserstein_distance returns the projections used and their associated EMD.
- Returns:
cost (float) – Sliced Wasserstein Cost
log (dict, optional) – log dictionary returned only if log==True in parameters
Examples
>>> n_samples_a = 20
>>> X = np.random.normal(0., 1., (n_samples_a, 5))
>>> sliced_wasserstein_distance(X, X, seed=0)
0.0
References
[31] Bonneel, Nicolas, et al. “Sliced and Radon Wasserstein barycenters of measures.” Journal of Mathematical Imaging and Vision 51.1 (2015): 22-45.
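The estimator differs from the max-sliced variant only in replacing the maximum over directions by an expectation. A minimal NumPy sketch (assuming p=2, uniform weights, and equally sized samples; `sliced_w2` is a hypothetical name, independent of the POT implementation):

```python
import numpy as np

def sliced_w2(X_s, X_t, n_projections=50, seed=None):
    # Average W_2^2 over random directions, then take the 1/p root
    rng = np.random.RandomState(seed)
    thetas = rng.normal(size=(X_s.shape[1], n_projections))
    thetas /= np.sqrt((thetas ** 2).sum(axis=0, keepdims=True))
    proj_s = np.sort(X_s @ thetas, axis=0)
    proj_t = np.sort(X_t @ thetas, axis=0)
    # mean over samples gives per-direction W_2^2; mean over directions
    # is the Monte-Carlo expectation
    return float(((proj_s - proj_t) ** 2).mean() ** 0.5)

rng = np.random.RandomState(42)
X_s = rng.normal(0., 1., (30, 4))
X_t = rng.normal(2., 1., (30, 4))       # target shifted by 2
print(sliced_w2(X_s, X_s, seed=0))      # 0.0
print(sliced_w2(X_s, X_t, seed=0) > 0)  # True
```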
- ot.sliced.sliced_wasserstein_sphere(X_s, X_t, a=None, b=None, n_projections=50, p=2, projections=None, seed=None, log=False)[source]
Compute the spherical sliced-Wasserstein discrepancy.
\[SSW_p(\mu,\nu) = \left(\int_{\mathbb{V}_{d,2}} W_p^p(P^U_\#\mu, P^U_\#\nu)\ \mathrm{d}\sigma(U)\right)^{\frac{1}{p}}\]where:
\(P^U_\# \mu\) stands for the pushforwards of the projection \(\forall x\in S^{d-1},\ P^U(x) = \frac{U^Tx}{\|U^Tx\|_2}\)
The function runs on the backend, but the tensorflow and jax backends are not supported.
- Parameters:
X_s (ndarray, shape (n_samples_a, dim)) – Samples in the source domain
X_t (ndarray, shape (n_samples_b, dim)) – Samples in the target domain
a (ndarray, shape (n_samples_a,), optional) – sample weights in the source domain
b (ndarray, shape (n_samples_b,), optional) – sample weights in the target domain
n_projections (int, optional) – Number of projections used for the Monte-Carlo approximation
p (float, optional (default=2)) – Power p used for computing the spherical sliced Wasserstein
projections (shape (n_projections, dim, 2), optional) – Projection matrix (n_projections and seed are not used in this case)
seed (int or RandomState or None, optional) – Seed used for random number generator
log (bool, optional) – if True, sliced_wasserstein_sphere returns the projections used and their associated EMD.
- Returns:
cost (float) – Spherical Sliced Wasserstein Cost
log (dict, optional) – log dictionary returned only if log==True in parameters
Examples
>>> n_samples_a = 20
>>> X = np.random.normal(0., 1., (n_samples_a, 5))
>>> X = X / np.sqrt(np.sum(X**2, -1, keepdims=True))
>>> sliced_wasserstein_sphere(X, X, seed=0)
0.0
References
[46] Bonet, C., Berg, P., Courty, N., Septier, F., Drumetz, L., & Pham, M. T. (2023). Spherical sliced-Wasserstein. International Conference on Learning Representations.
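The projection step \(P^U(x) = U^T x / \|U^T x\|_2\) can be illustrated in plain NumPy: each \(U\) is a \(d \times 2\) matrix with orthonormal columns (a point of the Stiefel manifold \(\mathbb{V}_{d,2}\)), so points of \(S^{d-1}\) land on the circle \(S^1\). A sketch, independent of the POT implementation:

```python
import numpy as np

rng = np.random.RandomState(0)
d, n_projections = 3, 4
# Orthonormalize Gaussian d x 2 matrices via QR to get points of V_{d,2}
U = np.stack([np.linalg.qr(rng.normal(size=(d, 2)))[0]
              for _ in range(n_projections)])         # (n_projections, d, 2)
x = rng.normal(size=(10, d))
x /= np.linalg.norm(x, axis=-1, keepdims=True)        # samples on S^{d-1}
proj = np.einsum('nij,ki->nkj', U, x)                 # U^T x for every U
proj /= np.linalg.norm(proj, axis=-1, keepdims=True)  # renormalize onto S^1
print(proj.shape)                                     # (4, 10, 2)
print(np.allclose(np.linalg.norm(proj, axis=-1), 1.)) # True
```

Each of the `n_projections` slices of `proj` is a 1-D measure supported on the circle, on which the Wasserstein distances of the integrand are computed.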
- ot.sliced.sliced_wasserstein_sphere_unif(X_s, a=None, n_projections=50, seed=None, log=False)[source]
Compute the 2-spherical sliced-Wasserstein discrepancy w.r.t. the uniform distribution.
\[SSW_2(\mu_n, \nu)\]
where:
\(\mu_n=\sum_{i=1}^n \alpha_i \delta_{x_i}\)
\(\nu=\mathrm{Unif}(S^1)\)
- Parameters:
X_s (ndarray, shape (n_samples_a, dim)) – Samples in the source domain
a (ndarray, shape (n_samples_a,), optional) – sample weights in the source domain
n_projections (int, optional) – Number of projections used for the Monte-Carlo approximation
seed (int or RandomState or None, optional) – Seed used for random number generator
log (bool, optional) – if True, sliced_wasserstein_sphere_unif returns the projections used and their associated EMD.
- Returns:
cost (float) – Spherical Sliced Wasserstein Cost
log (dict, optional) – log dictionary returned only if log==True in parameters
Examples
>>> np.random.seed(42)
>>> x0 = np.random.randn(500, 3)
>>> x0 = x0 / np.sqrt(np.sum(x0**2, -1, keepdims=True))
>>> ssw = sliced_wasserstein_sphere_unif(x0, seed=42)
>>> np.allclose(sliced_wasserstein_sphere_unif(x0, seed=42), 0.01734, atol=1e-3)
True
References
[46] Bonet, C., Berg, P., Courty, N., Septier, F., Drumetz, L., & Pham, M. T. (2023). Spherical sliced-Wasserstein. International Conference on Learning Representations.