

Poster

GACL: Exemplar-Free Generalized Analytic Continual Learning

Huiping Zhuang · Yizhu Chen · Di Fang · Run He · Kai Tong · Hongxin Wei · Ziqian Zeng · Cen Chen

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Class incremental learning (CIL) trains a network on sequential tasks, each containing disjoint categories, but suffers from catastrophic forgetting, where the model quickly loses previously learned knowledge when acquiring new tasks. Generalized CIL (GCIL) addresses CIL in a more realistic scenario, where incoming data contain mixed categories and an unknown sample-size distribution. Existing GCIL approaches either perform poorly or compromise data privacy by storing exemplars. In this paper, we propose a new exemplar-free GCIL technique named generalized analytic continual learning (GACL). GACL adopts analytic learning (a gradient-free training technique) and delivers an analytical (i.e., closed-form) solution to the GCIL scenario. The solution is derived by decomposing the incoming data into exposed and unexposed classes, yielding a weight-invariant property: a rare yet valuable property under which incremental learning is equivalent to joint training on all data. This equivalence is crucial in GCIL, as differing data distributions across tasks no longer pose a challenge when adopting GACL. Theoretically, the equivalence is validated with matrix analysis tools. Empirically, extensive experiments show that GACL consistently outperforms existing GCIL methods across various datasets and GCIL settings.
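To make the analytic-learning idea concrete, the sketch below shows a generic closed-form, exemplar-free classifier update of the kind GACL builds on: a ridge-regression head over frozen features, absorbed task by task via the Woodbury identity. This is a minimal illustration of the general technique, not the paper's exact GACL algorithm; the class name, hyperparameters, and the `gamma` regularizer are illustrative assumptions.

```python
import numpy as np

class AnalyticClassifier:
    """Ridge-regression head updated recursively in closed form.

    Illustrative sketch of gradient-free (analytic) continual learning;
    not the authors' exact GACL formulation.
    """

    def __init__(self, feature_dim: int, num_classes: int, gamma: float = 1.0):
        # W: weight matrix; R: inverse regularized autocorrelation,
        # i.e. (gamma*I + X^T X)^{-1}, initialized with no data seen.
        self.W = np.zeros((feature_dim, num_classes))
        self.R = np.eye(feature_dim) / gamma

    def fit_increment(self, X: np.ndarray, Y: np.ndarray) -> None:
        """Absorb one task's data (features X, one-hot labels Y) in closed form.

        The Woodbury identity keeps the update exemplar-free: only R and W
        are carried across tasks, never raw samples.
        """
        K = np.linalg.inv(np.eye(X.shape[0]) + X @ self.R @ X.T)
        self.R -= self.R @ X.T @ K @ X @ self.R
        self.W += self.R @ X.T @ (Y - X @ self.W)

    def predict(self, X: np.ndarray) -> np.ndarray:
        return (X @ self.W).argmax(axis=1)


# Weight-invariance check: incremental updates reproduce joint training
# exactly, regardless of how categories are mixed across the two tasks.
rng = np.random.default_rng(0)
X1, X2 = rng.normal(size=(50, 8)), rng.normal(size=(30, 8))
Y1, Y2 = np.eye(4)[rng.integers(0, 4, 50)], np.eye(4)[rng.integers(0, 4, 30)]

clf = AnalyticClassifier(feature_dim=8, num_classes=4)
clf.fit_increment(X1, Y1)
clf.fit_increment(X2, Y2)

Xj, Yj = np.vstack([X1, X2]), np.vstack([Y1, Y2])
W_joint = np.linalg.solve(Xj.T @ Xj + np.eye(8), Xj.T @ Yj)
assert np.allclose(clf.W, W_joint)
```

The final assertion demonstrates the equivalence the abstract refers to: the recursively updated weights match a single joint ridge-regression fit on all data, which is why mixed categories and uneven task sizes do not degrade this style of update.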
