

Keynote
in
Workshop on Scalable Continual Learning for Lifelong Foundation Models

Keynote 2: Modular Continual Learning

Marc'Aurelio Ranzato

Sat 14 Dec 10:30 a.m. PST — 11:20 a.m. PST

Abstract:

The revolution brought by large foundation models is perhaps the biggest opportunity for continual learning. I'll argue that the next step change can be realized via modularity. Modularity can enable adaptation to multiple kinds of change: from the amount of compute available, to temporal changes in the data distribution, and even changes in who is participating in improving the model. I'll anchor the talk on a few works I co-authored on the topic:

- T. Veniat, L. Denoyer, M. Ranzato. "Efficient Continual Learning with Modular Networks and Task-Driven Priors". ICLR 2021
- J. Bornschein, A. Galashov, R. Hemsley, A. Rannen-Triki, Y. Chen, A. Chaudhry, X. He, A. Douillard, M. Caccia, Q. Feng, J. Shen, S. Rebuffi, K. Stacpoole, D. de las Casas, W. Hawkins, A. Lazaridou, Y.W. Teh, A.A. Rusu, R. Pascanu, M. Ranzato. "NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision Research". JMLR 2022
- A. Douillard, Q. Feng, A.A. Rusu, A. Kuncoro, Y. Donchev, R. Chhaparia, I. Gog, M. Ranzato, J. Shen, A. Szlam. "DiPaCo: Distributed Path Composition". arXiv preprint arXiv:2403.10616
