Poster in Workshop: OPT 2023: Optimization for Machine Learning
Nesterov Meets Robust Multitask Learning Twice
Yifan Kang · Kai Liu
Abstract:
In this paper, we study the temporal multitask learning problem, where a smoothness constraint is imposed on the time-series weights. To select important features, a group-lasso penalty is introduced; moreover, the regression loss in each time frame is non-squared to alleviate the influence of varying noise scales across tasks, and a nuclear norm promotes the low-rank property. We first formulate the objective as a max-min problem, where the dual variable is optimized via an accelerated dual ascent method, while the primal variable is solved via a smoothed Fast Iterative Shrinkage-Thresholding Algorithm (S-FISTA). We provide a convergence analysis of the proposed method, and experiments demonstrate its effectiveness.
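The S-FISTA solver mentioned above builds on the classical FISTA scheme, which combines a proximal-gradient step with Nesterov extrapolation. As a minimal sketch (not the authors' implementation), the following shows plain FISTA on a standard lasso objective, 0.5·‖Ax − b‖² + λ‖x‖₁; the function name and step-size choice are illustrative assumptions:

```python
import numpy as np

def fista_lasso(A, b, lam, n_iters=200):
    """Minimize 0.5*||Ax - b||^2 + lam*||x||_1 with FISTA.

    Illustrative sketch only: the paper's S-FISTA additionally smooths
    non-smooth terms, but the Nesterov-accelerated proximal step below
    is the common core that yields the O(1/k^2) convergence rate.
    """
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the smooth gradient
    x = np.zeros(A.shape[1])
    y, t = x.copy(), 1.0
    for _ in range(n_iters):
        grad = A.T @ (A @ y - b)             # gradient of the smooth part at the extrapolated point
        z = y - grad / L                     # gradient step
        # soft-thresholding = proximal operator of the l1 penalty
        x_new = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
        t_new = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        y = x_new + ((t - 1.0) / t_new) * (x_new - x)  # Nesterov extrapolation
        x, t = x_new, t_new
    return x
```

Replacing the soft-thresholding step with a group-wise (block) shrinkage operator would give the group-lasso variant used for feature selection in the paper.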