

Poster in Workshop: Optimization for ML Workshop

Solving hidden monotone variational inequalities with surrogate losses

Ryan D'Orazio · Danilo Vucetic · Zichu Liu · Junhyung Lyle Kim · Ioannis Mitliagkas · Gauthier Gidel


Abstract:

Deep learning has proven to be effective in a wide variety of loss minimization problems. However, many applications of interest, like minimizing projected Bellman error and min-max optimization, cannot be modelled as minimizing a scalar loss function but instead correspond to solving a variational inequality (VI) problem. This difference in setting has caused many practical challenges, as naive gradient-based approaches from supervised learning tend to diverge and cycle in the VI case. In this work, we propose a surrogate-based approach that is principled in the VI setting and compatible with deep learning. We show that our approach has three main benefits: (1) it guarantees linear convergence under sufficient descent in the surrogate when hidden monotone structure is present (e.g., convex-concave with respect to model predictions), (2) it provides a unifying perspective of existing methods, and (3) it is amenable to existing deep learning optimizers like ADAM.
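The abstract does not spell out the algorithm, but the following minimal PyTorch sketch illustrates one way a surrogate-loss outer/inner loop of this kind could look: at each outer step, take an operator step in prediction space to form a target, then run a few Adam steps on a squared-distance surrogate until sufficient descent. All names here (operator_F, eta, inner_steps), the toy operator, and the stopping rule are illustrative assumptions, not the authors' method.

import torch

# Illustrative monotone operator acting on predictions z (stand-in for, e.g.,
# a min-max gradient field or a projected-Bellman-error operator).
def operator_F(z):
    A = torch.tensor([[0.1, 1.0], [-1.0, 0.1]])
    return z @ A.T

model = torch.nn.Sequential(torch.nn.Linear(4, 16), torch.nn.ReLU(), torch.nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(128, 4)   # fixed batch of inputs
eta = 0.5                 # operator step size in prediction space (assumed)
inner_steps = 20          # Adam steps per surrogate, a crude proxy for "sufficient descent"

for outer in range(100):
    with torch.no_grad():
        z = model(x)                          # current predictions
        target = z - eta * operator_F(z)      # one operator step in prediction space
    for _ in range(inner_steps):
        opt.zero_grad()
        # Surrogate loss: squared distance of predictions to the fixed target.
        surrogate = 0.5 * ((model(x) - target) ** 2).sum(dim=1).mean()
        surrogate.backward()
        opt.step()

The point of the sketch is only that the VI structure lives in prediction space while the inner minimization is an ordinary scalar loss, which is why off-the-shelf optimizers such as Adam can be reused.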
