

Poster in Workshop: Optimization for ML Workshop

Neural Entropic Multimarginal Optimal Transport

Dor Tsur · Ziv Goldfeld · Kristjan Greenewald · Haim Permuter


Abstract: Multimarginal optimal transport (MOT) is a powerful framework for modeling interactions between multiple distributions, yet its applicability is bottlenecked by high computational complexity. Entropic regularization yields entropic MOT (EMOT), which admits computational speedups via an extension of Sinkhorn's algorithm, whose time complexity generally scales as $O(n^k)$ for a dataset of size $n$ and $k$ marginals. This dependence on the entire dataset size is prohibitive in high-dimensional problems that require massive datasets. In this work, we propose a new computational framework for MOT, dubbed neural entropic MOT (NEMOT), that enjoys significantly improved scalability. NEMOT employs neural networks trained using mini-batches, which transfers the computational bottleneck from the dataset size to the mini-batch size and facilitates EMOT computation in large-scale problems. We provide formal theoretical guarantees on the accuracy of NEMOT via non-asymptotic error bounds that control the associated approximation (by neural networks) and estimation (from samples) errors. We also provide numerical results demonstrating the performance gains of NEMOT over Sinkhorn's algorithm. Consequently, NEMOT unlocks the MOT framework for large-scale machine learning.
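As a rough illustration of the mini-batch idea described in the abstract (not the authors' implementation), the sketch below parameterizes one neural dual potential per marginal and maximizes a mini-batch estimate of one standard unconstrained EMOT dual, $\sum_i \mathbb{E}_{\mu_i}[\varphi_i] - \varepsilon\, \mathbb{E}_{\otimes_i \mu_i}\big[e^{(\oplus_i \varphi_i - c)/\varepsilon}\big] + \varepsilon$. The network architecture, the $k=3$ pairwise squared-distance cost, the batch size, and the value of $\varepsilon$ are all assumptions chosen only for concreteness; each gradient step enumerates $b^k$ tuples from the mini-batch rather than $n^k$ tuples from the full dataset.

```python
# Illustrative sketch of mini-batch neural EMOT (assumptions throughout; not
# the authors' NEMOT code). Requires PyTorch.
import torch
import torch.nn as nn


class Potential(nn.Module):
    # One neural dual potential phi_i per marginal (architecture is an assumption).
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # shape (b,)


def pairwise_cost(batches):
    # Example cost for k = 3: sum of squared Euclidean distances over all pairs.
    x, y, z = batches
    d_xy = torch.cdist(x, y) ** 2  # (b, b)
    d_xz = torch.cdist(x, z) ** 2
    d_yz = torch.cdist(y, z) ** 2
    return d_xy[:, :, None] + d_xz[:, None, :] + d_yz[None, :, :]  # (b, b, b)


def emot_dual_batch(potentials, batches, cost_fn, eps):
    # Mini-batch estimate of the unconstrained EMOT dual:
    #   sum_i E[phi_i] - eps * E_product[exp((sum_i phi_i - c) / eps)] + eps.
    # The exponential term enumerates all b^k tuples within the mini-batch,
    # so the per-step cost scales with the batch size b, not the dataset size n.
    k, b = len(batches), batches[0].shape[0]
    phis = [p(x) for p, x in zip(potentials, batches)]  # k tensors of shape (b,)
    lin = sum(phi.mean() for phi in phis)
    # Broadcast phi_i along axis i to build the (b, ..., b) grid of potential sums.
    grid = sum(phi.reshape([b if j == i else 1 for j in range(k)])
               for i, phi in enumerate(phis))
    exp_term = torch.exp((grid - cost_fn(batches)) / eps).mean()
    return lin - eps * exp_term + eps


# Toy usage: three 2-d marginals, mini-batches of size 32.
dims, b, eps = [2, 2, 2], 32, 0.5
pots = [Potential(d) for d in dims]
opt = torch.optim.Adam([p for P in pots for p in P.parameters()], lr=1e-3)
for step in range(500):
    batches = [torch.randn(b, d) for d in dims]  # stand-ins for marginal samples
    loss = -emot_dual_batch(pots, batches, pairwise_cost, eps)  # maximize the dual
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Note the contrast with Sinkhorn's algorithm, which iterates over the full $n^k$ cost tensor: here only the current mini-batch enters each update, which is what makes the approach amenable to large datasets.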
