

Poster
in
Workshop: Optimization for ML Workshop

SICNN: Sparsity-induced Input Convex Neural Network for Optimal Transport

Peter Chen · Yue Xie · Qingpeng Zhang


Abstract: Optimal Transport (OT) theory seeks the map $T: X \to Y$ that transports the source measure $X$ to the target measure $Y$ while minimizing the cost $C(\mathbf{x}, T(\mathbf{x}))$ between a point $\mathbf{x}$ and its image $T(\mathbf{x})$. Building on previous work on the Input Convex Neural Network (ICNN) OT solver, and drawing inspiration from the concept of displacement-sparse maps, we introduce a sparsity penalty into the ICNN to promote sparsity in the displacement vectors $\Delta(\mathbf{x}) = T(\mathbf{x}) - \mathbf{x}$, improving the interpretability of the resulting map. However, a side effect of increased sparsity is reduced feasibility: $T(X)$ may deviate more significantly from the actual target measure. In the low-dimensional setting, we propose a heuristic framework to balance this trade-off between the sparsity and the feasibility of the map. The framework dynamically adjusts the sparsity-inducing intensity based on an evaluation of the maps learned over different iterations. In the high-dimensional setting, we directly constrain the dimensionality of the displacement vectors, i.e., for $X \subseteq \mathbb{R}^d$ and $\forall \mathbf{x} \in X$, we enforce $\dim(\Delta(\mathbf{x})) \leq l$, where $l \ll d$; that is, each displacement vector has at most $l$ nonzero components. Among all maps that satisfy this constraint, we aim to find the most feasible one. We demonstrate that this formulation can be solved in a novel way using our heuristic adjustment framework, without resorting to dimensionality reduction.
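The abstract's core ingredients can be illustrated with a minimal NumPy sketch: a toy one-layer ICNN potential $\psi$ whose gradient gives the transport map $T = \nabla\psi$, an $\ell_1$ penalty on the displacements $\Delta(\mathbf{x}) = T(\mathbf{x}) - \mathbf{x}$ as a stand-in for the sparsity-inducing term, and a crude feasibility score driving a dynamic adjustment of the penalty intensity $\lambda$. All of these choices (the architecture, the $\ell_1$ surrogate, the moment-matching feasibility measure, and the halving/doubling rule for $\lambda$) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(0)

def icnn_potential(x, W, a):
    # psi(x) = sum_j a_j * softplus(w_j . x); nonnegative a_j keeps psi convex in x
    return np.logaddexp(0.0, x @ W.T) @ a

def transport_map(x, W, a):
    # T(x) = grad psi(x) = sum_j a_j * sigmoid(w_j . x) * w_j
    s = 1.0 / (1.0 + np.exp(-(x @ W.T)))   # (n, h)
    return (s * a) @ W                      # (n, d)

# Toy source and target samples in R^2 (hypothetical data)
X = rng.normal(size=(256, 2))
Y = rng.normal(size=(256, 2)) + np.array([2.0, 0.0])

# Random ICNN parameters; a >= 0 preserves convexity of the potential
W = rng.normal(size=(8, 2))
a = np.abs(rng.normal(size=8))

T = transport_map(X, W, a)
delta = T - X                                        # displacement vectors

# Sparsity surrogate: mean L1 norm of the displacements
sparsity_pen = np.abs(delta).sum(axis=1).mean()

# Crude feasibility score: gap between the means of T(X) and Y
feasibility_gap = np.linalg.norm(T.mean(axis=0) - Y.mean(axis=0))

# Illustrative dynamic adjustment of the sparsity intensity lambda:
# relax the penalty when the map is too infeasible, tighten it otherwise
lam, tol = 0.5, 1.0
lam = lam * 0.5 if feasibility_gap > tol else lam * 2.0
```

In a real solver the penalty would enter a training loss for the ICNN parameters, and the adjustment rule would react to evaluations of maps learned across iterations, as the abstract describes; the sketch only shows the quantities being traded off.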
