Poster
Theoretical Investigations and Practical Enhancements on Tail Task Risk Minimization in Meta Learning
Yiqin Lv · Qi Wang · Dong Liang · Zheng Xie
Meta learning is a promising paradigm in the era of large models, and task distributional robustness has become an indispensable consideration in real-world scenarios. Recent advances have examined the effectiveness of tail task risk minimization in improving fast adaptation robustness \citep{wang2023simple}. This work contributes further theoretical investigations and practical enhancements to the field. Specifically, we reduce the distributionally robust strategy to a max-min optimization problem, adopt the Stackelberg equilibrium as the solution concept, and estimate the convergence rate. In the presence of tail risk, we further derive a generalization bound, establish connections with estimated quantiles, and practically improve the studied strategy. Accordingly, extensive evaluations demonstrate the significance of our proposal in boosting robustness.
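To make the max-min reduction concrete, a minimal sketch in our own notation (assuming, for illustration, that the tail task risk is measured by CVaR at confidence level $\alpha$ over the task distribution $p(\tau)$, with $\mathcal{L}(\theta;\tau)$ the post-adaptation meta loss):
\[
\min_{\theta}\,\mathrm{CVaR}_{\alpha}\bigl[\mathcal{L}(\theta;\tau)\bigr]
= \min_{\theta}\,\max_{q\in\mathcal{Q}_{\alpha}} \mathbb{E}_{\tau\sim q}\bigl[\mathcal{L}(\theta;\tau)\bigr],
\qquad
\mathcal{Q}_{\alpha}=\Bigl\{q \ll p:\ \tfrac{dq}{dp}\le \tfrac{1}{1-\alpha}\Bigr\},
\]
where the adversarial task distribution $q$ and the meta learner $\theta$ play the leader-follower roles of a Stackelberg game. The equivalent Rockafellar-Uryasev form,
\[
\mathrm{CVaR}_{\alpha}\bigl[\mathcal{L}(\theta;\tau)\bigr]
= \min_{t\in\mathbb{R}}\Bigl\{t+\tfrac{1}{1-\alpha}\,\mathbb{E}_{\tau\sim p}\bigl[(\mathcal{L}(\theta;\tau)-t)_{+}\bigr]\Bigr\},
\]
makes explicit the quantile (value-at-risk) variable $t$ that the estimated quantiles in the practical strategy track.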