.. DO NOT EDIT.
.. THIS FILE WAS AUTOMATICALLY GENERATED BY SPHINX-GALLERY.
.. TO MAKE CHANGES, EDIT THE SOURCE PYTHON FILE:
.. "auto_examples/backends/plot_Sinkhorn_gradients.py"
.. LINE NUMBERS ARE GIVEN BELOW.

.. only:: html

    .. note::
        :class: sphx-glr-download-link-note

        :ref:`Go to the end <sphx_glr_download_auto_examples_backends_plot_Sinkhorn_gradients.py>`
        to download the full example code.

.. rst-class:: sphx-glr-example-title

.. _sphx_glr_auto_examples_backends_plot_Sinkhorn_gradients.py:


=================================================================
Different gradient computations for regularized optimal transport
=================================================================

This example illustrates the differences in computation time between the
gradient options for the Sinkhorn solver.

.. GENERATED FROM PYTHON SOURCE LINES 10-22

.. code-block:: Python


    # Author: Sonia Mazelet
    #
    # License: MIT License

    # sphinx_gallery_thumbnail_number = 1

    import matplotlib.pylab as pl
    import ot
    from ot.backend import torch

.. GENERATED FROM PYTHON SOURCE LINES 23-25

Time comparison of the Sinkhorn solver for different gradient options
---------------------------------------------------------------------

.. GENERATED FROM PYTHON SOURCE LINES 28-86
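For context on why the ``envelope`` option can avoid differentiating through the
Sinkhorn iterations: by the envelope (Danskin) theorem, at convergence the gradient
of the entropic OT value with respect to the marginals is given directly by the
optimal dual potentials. This identity is standard for entropic OT but is not part
of the generated example; the notation below is ours:

.. math::

    \nabla_a \mathrm{OT}_{\varepsilon}(a, b) = f^{\star}, \qquad
    \nabla_b \mathrm{OT}_{\varepsilon}(a, b) = g^{\star},

where :math:`(f^{\star}, g^{\star})` are the optimal dual potentials produced by the
Sinkhorn iterations (defined up to an additive constant). By contrast, ``autodiff``
backpropagates through every iteration, and ``last_step`` only through the final one.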
.. code-block:: Python


    n_trials = 10
    times_autodiff = torch.zeros(n_trials)
    times_envelope = torch.zeros(n_trials)
    times_last_step = torch.zeros(n_trials)

    n_samples_s = 300
    n_samples_t = 300
    n_features = 5
    reg = 0.03

    # Time the Sinkhorn solver and the gradient computations, for each gradient
    # option, over several random problems
    for i in range(n_trials):
        x = torch.rand((n_samples_s, n_features))
        y = torch.rand((n_samples_t, n_features))
        a = ot.utils.unif(n_samples_s)
        b = ot.utils.unif(n_samples_t)
        M = ot.dist(x, y)

        a = torch.tensor(a, requires_grad=True)
        b = torch.tensor(b, requires_grad=True)
        M = M.clone().detach().requires_grad_(True)

        # autodiff provides the gradient for all the outputs (plan, value, value_linear)
        ot.tic()
        res_autodiff = ot.solve(M, a, b, reg=reg, grad="autodiff")
        res_autodiff.value.backward()
        times_autodiff[i] = ot.toq()

        a = a.clone().detach().requires_grad_(True)
        b = b.clone().detach().requires_grad_(True)
        M = M.clone().detach().requires_grad_(True)

        # envelope provides the gradient for the value only
        ot.tic()
        res_envelope = ot.solve(M, a, b, reg=reg, grad="envelope")
        res_envelope.value.backward()
        times_envelope[i] = ot.toq()

        a = a.clone().detach().requires_grad_(True)
        b = b.clone().detach().requires_grad_(True)
        M = M.clone().detach().requires_grad_(True)

        # last_step provides the gradient for all the outputs, but only through
        # the last iteration of the Sinkhorn algorithm
        ot.tic()
        res_last_step = ot.solve(M, a, b, reg=reg, grad="last_step")
        res_last_step.value.backward()
        times_last_step[i] = ot.toq()

    pl.figure(1, figsize=(5, 3))
    pl.ticklabel_format(axis="y", style="sci", scilimits=(0, 0))
    pl.boxplot(
        [times_autodiff, times_envelope, times_last_step],
        tick_labels=["autodiff", "envelope", "last_step"],
        showfliers=False,
    )
    pl.ylabel("Time (s)")
    pl.show()
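The three options change only how gradients are obtained, not the solver itself, so
the returned transport values should agree up to the solver's convergence tolerance.
A minimal sketch checking this on a small problem (the sizes and seed below are
illustrative and do not match the timing benchmark):

.. code-block:: Python

    import torch
    import ot

    torch.manual_seed(0)

    # Small random problem (chosen for speed, smaller than the benchmark above)
    x = torch.rand((50, 5))
    y = torch.rand((60, 5))
    a = torch.tensor(ot.utils.unif(50), requires_grad=True)
    b = torch.tensor(ot.utils.unif(60), requires_grad=True)
    M = ot.dist(x, y).clone().detach().requires_grad_(True)

    # Solve the same problem once per gradient option
    values = {}
    for grad in ["autodiff", "envelope", "last_step"]:
        res = ot.solve(M, a, b, reg=0.03, grad=grad)
        values[grad] = res.value.item()

    # Only the gradient bookkeeping differs, so the values should be close
    print(values)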
.. image-sg:: /auto_examples/backends/images/sphx_glr_plot_Sinkhorn_gradients_001.png
   :alt: plot Sinkhorn gradients
   :srcset: /auto_examples/backends/images/sphx_glr_plot_Sinkhorn_gradients_001.png
   :class: sphx-glr-single-img


.. rst-class:: sphx-glr-timing

   **Total running time of the script:** (0 minutes 55.443 seconds)


.. _sphx_glr_download_auto_examples_backends_plot_Sinkhorn_gradients.py:

.. only:: html

  .. container:: sphx-glr-footer sphx-glr-footer-example

    .. container:: sphx-glr-download sphx-glr-download-jupyter

      :download:`Download Jupyter notebook: plot_Sinkhorn_gradients.ipynb <plot_Sinkhorn_gradients.ipynb>`

    .. container:: sphx-glr-download sphx-glr-download-python

      :download:`Download Python source code: plot_Sinkhorn_gradients.py <plot_Sinkhorn_gradients.py>`

    .. container:: sphx-glr-download sphx-glr-download-zip

      :download:`Download zipped: plot_Sinkhorn_gradients.zip <plot_Sinkhorn_gradients.zip>`

.. only:: html

 .. rst-class:: sphx-glr-signature

    `Gallery generated by Sphinx-Gallery <https://sphinx-gallery.github.io>`_