Poster
in
Workshop: Optimization for ML Workshop
A Stochastic Algorithm for Sinkhorn Distance-Regularized Distributionally Robust Optimization
Yufeng Yang · Yi Zhou · Zhaosong Lu
Abstract:
Distributionally Robust Optimization (DRO) is a powerful modeling technique for tackling the challenges caused by data distribution shifts. This paper focuses on Sinkhorn distance-regularized DRO. We generalize the Sinkhorn distance to allow a broader choice of functions for modeling the ambiguity set, and derive the Lagrangian dual, which takes the form of a nested stochastic program. We also design an algorithm based on stochastic gradient descent with an easy-to-implement constant learning rate. Unlike previous work that analyzes algorithms for convex and bounded loss functions, our algorithm provides convergence guarantees for non-convex and possibly unbounded loss functions under a proper choice of sampling batch size. The resulting sample complexity for finding an $\epsilon$-stationary point is independent of the data size and parameter dimension, making our modeling and algorithms suitable for large-scale applications.
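To give a flavor of the algorithmic idea, the following is a minimal, hypothetical sketch (not the paper's actual method) of constant-stepsize SGD on an entropically regularized worst-case objective of the form $\lambda \log \mathbb{E}[\exp(\ell(\theta, z)/\lambda)]$, whose gradient estimator is a softmax-weighted average of per-sample gradients; the toy model, data, and all parameter values (`lam`, `eta`, `batch`) are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy problem: least-squares loss on synthetic Gaussian data.
d = 5
X = rng.normal(size=(1000, d))
y = X @ np.ones(d) + 0.1 * rng.normal(size=1000)

lam = 2.0    # regularization weight lambda (illustrative value)
eta = 0.05   # constant learning rate, as the abstract suggests
batch = 64   # batch size; a larger batch reduces the bias of the
             # nested (log-of-expectation) gradient estimator

def loss_and_grad(theta, xb, yb):
    """Per-sample losses l(theta, z) and per-sample gradients."""
    r = xb @ theta - yb
    return 0.5 * r**2, xb * r[:, None]

theta = np.zeros(d)
for _ in range(1000):
    idx = rng.integers(0, len(X), size=batch)
    losses, grads = loss_and_grad(theta, X[idx], y[idx])
    # Gradient of lam * log E[exp(l/lam)] is a softmax-weighted
    # average of per-sample gradients (shifted for numerical stability).
    w = np.exp((losses - losses.max()) / lam)
    w /= w.sum()
    theta -= eta * (w[:, None] * grads).sum(axis=0)
```

The softmax weights up-weight high-loss samples, so the update emphasizes the (soft) worst case; with `lam` large the weights flatten and the method reduces to ordinary SGD on the average loss.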