Poster in Workshop: Optimization for ML Workshop
Accelerated Stability in Performative Prediction
Pedram Khorsandi · Rushil Gupta · Mehrnaz Mofakhami · Simon Lacoste-Julien · Gauthier Gidel
In performative prediction, where deployed models influence the data distribution, ensuring rapid convergence to a stable solution is crucial, especially in evolving environments. This paper extends the Repeated Risk Minimization (RRM) framework by utilizing historical datasets from previous retraining snapshots, enabling accelerated convergence to a performatively stable point. Our contributions include proving the tightness of existing bounds, introducing a novel framework for improved convergence, and providing both theoretical guarantees and empirical results to validate our approach. Experimental evaluations demonstrate that aggregating past snapshots significantly reduces performative loss shifts and accelerates stability, making the method particularly effective in dynamic settings where distributions evolve with each model update.
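To make the setup concrete, here is a minimal toy sketch (not the paper's exact algorithm) of repeated risk minimization in a performative setting, contrasting refitting on only the newest data with refitting on an aggregate of all past snapshots. All names and the 1-D location-family model (`MU0`, `EPS`, `dataset_mean`) are illustrative assumptions; in this simple contraction the plain RRM iterates already converge geometrically, so the sketch only illustrates the snapshot-aggregation mechanism, not the paper's acceleration guarantees.

```python
# Toy performative-prediction setting (illustrative assumption, not the
# paper's model): data from D(theta) has mean mu0 + eps * theta, so the
# distribution shifts with the deployed model theta.
# Under squared loss, the risk minimizer on a dataset is its mean, so:
#   - standard RRM refits on the newest snapshot only:
#       theta_{t+1} = mu0 + eps * theta_t
#   - the aggregated variant refits on the union of all past snapshots,
#     which for squared loss reduces to averaging the snapshot means.
# Both reach the performatively stable point theta* = mu0 / (1 - eps)
# when |eps| < 1 (theta* is the fixed point of theta -> mu0 + eps*theta).

MU0, EPS = 1.0, 0.5           # base mean and performative sensitivity
STABLE = MU0 / (1.0 - EPS)    # performatively stable point (= 2.0 here)

def dataset_mean(theta):
    """Population mean of D(theta); stands in for a sampled snapshot."""
    return MU0 + EPS * theta

def rrm(theta0, steps):
    """Standard RRM: refit on the single newest snapshot each round."""
    theta = theta0
    for _ in range(steps):
        theta = dataset_mean(theta)
    return theta

def rrm_aggregated(theta0, steps):
    """RRM variant that refits on all snapshots collected so far."""
    theta, snapshot_means = theta0, []
    for _ in range(steps):
        snapshot_means.append(dataset_mean(theta))
        # squared-loss minimizer over the union of snapshots = grand mean
        theta = sum(snapshot_means) / len(snapshot_means)
    return theta
```

Both iterations drive `theta` toward `STABLE`; the aggregated variant's update depends on the whole retraining history rather than only the last deployment.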