

Poster

Adaptive Smoothed Online Multi-Task Learning

Keerthiram Murugesan · Hanxiao Liu · Jaime Carbonell · Yiming Yang

Area 5+6+7+8 #84

Keywords: [ Online Learning ] [ Multi-task and Transfer Learning ]


Abstract:

This paper addresses the challenge of jointly learning both the per-task model parameters and the inter-task relationships in a multi-task online learning setting. The proposed algorithm features a probabilistic interpretation, efficient update rules, and flexible modulation of whether learners focus on their specific task or jointly address all tasks. The paper also proves a sub-linear regret bound relative to the best linear predictor in hindsight. Experiments on three multi-task learning benchmark datasets show that the proposed approach outperforms several state-of-the-art online multi-task learning baselines.
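The abstract describes jointly maintaining per-task predictors and inter-task relationship weights, with an adaptive balance between task-specific and joint prediction. The NumPy sketch below is only a minimal illustration of that general idea, not the paper's algorithm: the relationship matrix `P`, the mistake-driven weight update, the softmax re-weighting of task relations, and the hyperparameters `eta` and `lam` are all illustrative assumptions.

```python
import numpy as np

# Minimal sketch of smoothed online multi-task learning (illustrative only;
# update rules and names are assumptions, not the paper's exact method).

rng = np.random.default_rng(0)
K, d = 3, 5                      # number of tasks, feature dimension
W = np.zeros((K, d))             # one linear predictor per task
P = np.full((K, K), 1.0 / K)     # row-stochastic task-relationship matrix
eta, lam = 0.1, 1.0              # learning rate, relationship temperature

def smoothed_margin(k, x):
    """Blend task k's score with the scores of related tasks via P."""
    return P[k] @ (W @ x)

for t in range(1000):
    k = rng.integers(K)                     # task of the incoming example
    x = rng.normal(size=d)
    y = np.sign(x[0] + 0.1 * k + 1e-9)      # toy task-dependent label

    if y * smoothed_margin(k, x) <= 0:      # mistake-driven update
        # Update each task's weights in proportion to its relatedness to k.
        W += eta * y * np.outer(P[k], x)

        # Re-weight task relationships: tasks whose own predictors incur
        # low hinge loss on this example gain probability mass.
        losses = np.maximum(0.0, 1.0 - y * (W @ x))
        P[k] = np.exp(-lam * losses)
        P[k] /= P[k].sum()
```

The intent of the sketch is to make the abstract's two moving parts concrete: per-task parameters (`W`) are updated online from each example, while the relationship weights (`P`) adapt so each learner can lean on its own task or borrow strength from related tasks.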
