
Poster

Penalty-based Methods for Simple Bilevel Optimization under Hölderian Error Bounds

Pengyu Chen · Xu Shi · Rujun Jiang · Jiulin Wang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract: This paper investigates simple bilevel optimization problems in which we minimize a convex upper-level objective over the optimal solution set of a convex lower-level objective. Existing methods for such problems guarantee only asymptotic convergence, converge at slow sublinear rates, or require strong assumptions. To address these challenges, we propose a penalization framework that delineates the relationship between approximate solutions of the original problem and its reformulated counterparts. This framework accommodates varying assumptions on smoothness and convexity, enabling the application of specific methods with different complexity results. Specifically, when both the upper- and lower-level objectives are composite convex functions, under an $\alpha$-Hölderian error bound condition and certain mild assumptions, our algorithm attains an $(\epsilon,\epsilon^{\beta})$-optimal solution of the original problem for any $\beta > 0$ within $\mathcal{O}\left(\sqrt{1/\epsilon^{\max\{\alpha,\beta\}}}\right)$ iterations. This result can be improved further if the smooth part of the upper-level objective is strongly convex. We also establish complexity results when the upper- and lower-level objectives are general nonsmooth functions. Numerical experiments demonstrate the effectiveness of our algorithms.
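The abstract's core idea, replacing the bilevel problem with a single-level penalized surrogate, can be illustrated with a minimal sketch. The code below is not the paper's algorithm; it is a generic first-order penalty loop on a hypothetical toy instance, where the lower level $g(x) = \tfrac{1}{2}\|Ax-b\|^2$ has a whole affine set of minimizers and the upper level $f(x) = \tfrac{1}{2}\|x\|^2$ selects the minimum-norm one. The penalty schedule, step sizes, and iteration counts are arbitrary illustrative choices.

```python
# A minimal sketch of the generic penalty idea for simple bilevel optimization
# (illustration only; not the authors' exact method). Toy instance:
#   lower level:  g(x) = 0.5 * ||A x - b||^2   (argmin g is an affine set, g* = 0)
#   upper level:  f(x) = 0.5 * ||x||^2          (selects the minimum-norm solution)
# We run gradient descent on the penalized surrogate
#   phi_rho(x) = f(x) + rho * g(x)
# with an increasing penalty rho, which drives g(x) toward g* while decreasing f.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 10))   # underdetermined, full row rank almost surely
b = rng.standard_normal(5)

def grad_phi(x, rho):
    # gradient of phi_rho: grad f(x) + rho * grad g(x) = x + rho * A^T (A x - b)
    return x + rho * A.T @ (A @ x - b)

x = np.zeros(10)
for rho in [1.0, 10.0, 100.0, 1000.0]:           # hypothetical penalty schedule
    L = 1.0 + rho * np.linalg.norm(A, 2) ** 2    # Lipschitz constant of grad_phi
    for _ in range(5000):                        # warm-started inner loop; a careful
        x = x - (1.0 / L) * grad_phi(x, rho)     # implementation scales this with rho

# ground truth: the minimum-norm least-squares solution pinv(A) @ b
x_star = np.linalg.pinv(A) @ b
print("upper-level value f(x):", 0.5 * x @ x, "vs f*:", 0.5 * x_star @ x_star)
print("lower-level residual g(x) - g*:", 0.5 * np.linalg.norm(A @ x - b) ** 2)
```

As rho grows, the surrogate's minimizer approaches the minimum-norm least-squares solution, so both printed gaps shrink toward zero. This loosely mirrors what the abstract formalizes: relating approximate minimizers of the penalized reformulation to $(\epsilon,\epsilon^{\beta})$-optimal solutions (small upper-level gap, small lower-level infeasibility) of the original bilevel problem.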
