Look-ahead Meta Learning for Continual Learning
Gunshi Gupta, Karmesh Yadav, Liam Paull
Oral presentation: Orals & Spotlights Track 16: Continual/Meta/Misc Learning
on 2020-12-09T06:15:00-08:00 - 2020-12-09T06:30:00-08:00
Poster Session 4
on 2020-12-09T09:00:00-08:00 - 2020-12-09T11:00:00-08:00
GatherTown: Continual and meta-learning ( Town D3 - Spot A3 )
Only if the poster is crowded, join Zoom. Authors have to start the Zoom call from their Profile page / Presentation History.
Abstract: The continual learning problem involves training models with limited capacity to perform well on an unknown number of sequentially arriving tasks. While meta-learning shows great potential for reducing interference between old and new tasks, current training procedures tend to be either slow or offline, and sensitive to many hyper-parameters. In this work, we propose Look-ahead MAML (La-MAML), a fast optimisation-based meta-learning algorithm for online-continual learning, aided by a small episodic memory. By incorporating the modulation of per-parameter learning rates in our meta-learning update, our approach also allows us to draw connections to and exploit prior work on hypergradients and meta-descent. This provides a more flexible and efficient way to mitigate catastrophic forgetting compared to conventional prior-based methods. La-MAML achieves performance superior to other replay-based, prior-based and meta-learning-based approaches for continual learning on real-world visual classification benchmarks.
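To make the abstract's description more concrete, below is a minimal, first-order sketch of this style of update: a few look-ahead SGD steps on the new task's batch using learnable per-parameter learning rates, followed by a meta-loss over the new batch plus a small episodic-memory sample whose gradients update both the learning rates and the weights. The tiny linear model, batch sizes, and all hyper-parameter values (DIM, INNER_STEPS, the 0.3 meta step size, etc.) are illustrative assumptions for this sketch, not the paper's architecture, settings, or reference implementation.

```python
# Illustrative La-MAML-style update (a sketch, not the authors' code).
import torch
import torch.nn.functional as F

torch.manual_seed(0)

DIM, N_CLASSES, INNER_STEPS = 16, 4, 3  # hypothetical sizes for the sketch

# Meta-parameters: the model weights and one learnable learning rate per weight.
weights = {
    "w": (0.01 * torch.randn(N_CLASSES, DIM)).requires_grad_(),
    "b": torch.zeros(N_CLASSES, requires_grad=True),
}
alphas = {k: torch.full(v.shape, 0.10, requires_grad=True) for k, v in weights.items()}
alpha_opt = torch.optim.SGD(list(alphas.values()), lr=0.3)  # meta step size for the alphas


def forward(params, x):
    # Tiny linear classifier standing in for the real model.
    return F.linear(x, params["w"], params["b"])


def la_maml_step(new_x, new_y, mem_x, mem_y):
    # Inner loop: a few look-ahead SGD steps on the new batch, using the current
    # per-parameter learning rates and keeping the graph for higher-order gradients.
    fast = dict(weights)
    for _ in range(INNER_STEPS):
        inner_loss = F.cross_entropy(forward(fast, new_x), new_y)
        grads = torch.autograd.grad(inner_loss, list(fast.values()), create_graph=True)
        fast = {k: fast[k] - alphas[k] * g for k, g in zip(fast, grads)}

    # Meta-loss on new data plus a small episodic-memory sample, so the look-ahead
    # update is penalised when it interferes with previously seen tasks.
    meta_x, meta_y = torch.cat([new_x, mem_x]), torch.cat([new_y, mem_y])
    meta_loss = F.cross_entropy(forward(fast, meta_x), meta_y)

    # Meta-gradients flow back to both the initial weights and the learning rates.
    meta_grads = torch.autograd.grad(meta_loss, list(weights.values()) + list(alphas.values()))
    w_grads, a_grads = meta_grads[: len(weights)], meta_grads[len(weights):]

    # Update the learning rates first, then take the weight step using the updated,
    # non-negative rates as per-parameter step sizes.
    for a, g in zip(alphas.values(), a_grads):
        a.grad = g
    alpha_opt.step()
    alpha_opt.zero_grad()
    with torch.no_grad():
        for k, g in zip(weights, w_grads):
            weights[k] -= alphas[k].clamp(min=0.0) * g
    return meta_loss.item()


# Toy usage: random tensors stand in for the new-task batch and the replay buffer.
new_x, new_y = torch.randn(8, DIM), torch.randint(0, N_CLASSES, (8,))
mem_x, mem_y = torch.randn(8, DIM), torch.randint(0, N_CLASSES, (8,))
print("meta-loss:", la_maml_step(new_x, new_y, mem_x, mem_y))
```

Because the meta-gradient of the learning rates reflects how the look-ahead step affects the memory samples, rates can shrink (or be clipped to zero) for parameters whose updates would cause interference, which is the connection to hypergradient and meta-descent methods mentioned in the abstract.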