Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
Controlling Forgetting with Test-Time Data in Continual Learning
Vaibhav Singh · Rahaf Aljundi · Eugene Belilovsky
Foundational vision-language models excel at a wide range of tasks but require updates as new tasks or domains emerge. Current Continual Learning (CL) methods, which focus on supervised training, often suffer from significant forgetting, degrading on previously learned tasks to below the original model's zero-shot performance. This work proposes leveraging unlabeled test-time data in a self-supervised manner to refresh the model's memory of previously learned tasks, minimizing forgetting without additional labeling. By introducing a student-teacher framework with gradient-based sparse parameter updates, the approach enhances performance on prior tasks and reduces reliance on offline memory buffers, effectively improving continual learning outcomes.
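Below is a minimal PyTorch sketch of the mechanism the abstract describes: a frozen teacher and a trainable student are run on the same unlabeled test-time batch, a self-supervised distillation loss pulls the student's embeddings toward the teacher's, and the resulting gradients are sparsified so that only a small fraction of parameters is updated. The encoder interface, the cosine-distance loss, the `sparsity` ratio, and the function name `sparse_refresh_step` are illustrative assumptions for this sketch, not the authors' implementation.

```python
import copy
import torch
import torch.nn.functional as F


def sparse_refresh_step(student, teacher, images, optimizer, sparsity=0.01):
    """One self-supervised refresh step on an unlabeled test-time batch:
    distill the frozen teacher's embeddings into the student, then keep only
    the largest-magnitude gradient entries before the optimizer step."""
    teacher.eval()
    with torch.no_grad():
        t_feat = F.normalize(teacher(images), dim=-1)   # frozen targets

    s_feat = F.normalize(student(images), dim=-1)
    # Self-supervised distillation loss: cosine distance between student and
    # teacher embeddings of the same unlabeled images (no labels needed).
    loss = 1.0 - (s_feat * t_feat).sum(dim=-1).mean()

    optimizer.zero_grad()
    loss.backward()

    # Gradient-based sparse parameter selection: in each parameter tensor,
    # zero out all but the top `sparsity` fraction of gradients by magnitude,
    # so only a small subset of weights is actually refreshed.
    for p in student.parameters():
        if p.grad is None:
            continue
        g = p.grad.abs().flatten()
        k = max(1, int(sparsity * g.numel()))
        threshold = torch.topk(g, k).values.min()
        p.grad.mul_((p.grad.abs() >= threshold).to(p.grad.dtype))

    optimizer.step()
    return loss.item()


if __name__ == "__main__":
    # Toy demonstration with a linear "encoder" standing in for a vision backbone.
    pretrained = torch.nn.Linear(32, 16)
    teacher = copy.deepcopy(pretrained).eval()
    student = copy.deepcopy(pretrained)
    with torch.no_grad():
        for p in student.parameters():
            p.add_(0.1 * torch.randn_like(p))   # simulate drift from supervised CL updates
    opt = torch.optim.SGD(student.parameters(), lr=1e-2)
    unlabeled_batch = torch.randn(8, 32)        # stand-in for unlabeled test-time images
    print(sparse_refresh_step(student, teacher, unlabeled_batch, opt))
```

In this reading, the teacher is a frozen copy of the model whose knowledge should be preserved, the student is the continually updated model, and the sparse gradient mask limits how far each unsupervised refresh step can move the student away from its supervised solution.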