

Poster in Workshop: Optimization for ML Workshop

Stochastic Quasi-Variational Inequalities: Convergence Analysis Beyond Strong Monotonicity

Zeinab Alizadeh · Afrooz Jalilzadeh


Abstract:

The Variational Inequality (VI) is a well-established framework for Nash equilibrium and saddle-point problems. However, its generalization, the Quasi-Variational Inequality (QVI), in which the constraint set depends on the decision variable, is far less understood, with existing results focused on strongly monotone cases. This paper proposes an extra-gradient method for a class of monotone Stochastic Quasi-Variational Inequalities (SQVIs) and provides the first convergence rate analysis for the non-strongly monotone setting. Our approach not only advances the theoretical understanding of SQVIs but also demonstrates their practical applicability.
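To make the setting concrete, below is a minimal sketch of a generic stochastic extra-gradient iteration with a decision-dependent constraint set K(x), which is the defining feature of a QVI. The operator F (a monotone but not strongly monotone rotation), the toy constraint map in project_K, and the fixed step size gamma are illustrative assumptions; the paper's actual algorithm, step-size schedule, and convergence analysis may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def F(x, noise_scale=0.1):
    """Noisy evaluation of a monotone operator.
    Toy example: F(x) = A x with A skew-symmetric, so F is monotone
    but not strongly monotone."""
    A = np.array([[0.0, 1.0], [-1.0, 0.0]])
    return A @ x + noise_scale * rng.standard_normal(x.shape)

def project_K(y, x):
    """Euclidean projection onto an illustrative moving set K(x):
    a box whose upper bound is coupled to the other coordinate of x."""
    ub = 1.0 + 0.5 * np.abs(x[::-1])
    return np.clip(y, 0.0, ub)

def stochastic_extragradient(x0, gamma=0.1, iters=1000):
    """Generic stochastic extra-gradient loop for a QVI."""
    x = x0.copy()
    avg = np.zeros_like(x)
    for _ in range(iters):
        # Extrapolation step: gradient step, then project onto K(x).
        y = project_K(x - gamma * F(x), x)
        # Update step: re-evaluate F at y; here K(x) is reused
        # (one common variant; others project onto K(y)).
        x = project_K(x - gamma * F(y), x)
        avg += x
    return x, avg / iters  # last iterate and ergodic average

if __name__ == "__main__":
    x_last, x_avg = stochastic_extragradient(np.array([1.0, 1.0]))
    print("last iterate:", x_last)
    print("averaged iterate:", x_avg)
```

The two projections per iteration are what distinguish extra-gradient from a plain projected stochastic gradient step; for merely monotone operators such as the rotation above, the single-step method can cycle, whereas the extrapolated update (typically reported via an ergodic average) is the standard remedy.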
