

Poster in Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

A scalable Bayesian continual learning framework for online and sequential decision making

Hanwen Xing · Christopher Yau

Keywords: [ sequential decision making ] [ Gaussian process ] [ mixture of experts ]


Abstract:

Continual learning (CL) refers to the ability to continually learn and exploit new knowledge while retaining experiences accumulated from the past. Although numerous CL methods have been proposed in recent years, it is not straightforward to deploy them directly in online or sequential decision-making problems because of their computational burden and lack of uncertainty quantification. In this paper, we focus on instance-incremental classification problems with concept shift and propose an online/sequential decision-making model based on a novel scalable Bayesian continual learning framework that provides i) a statistically principled and computationally efficient Bayesian knowledge-updating scheme and ii) a scalable and exact posterior inference procedure based on a mixture of experts model. In addition, as an exemplar-free method, our approach does not require storing or modelling any previously seen instances, making it appealing for, e.g., online decision-making problems in biomedical applications where data privacy is a concern.
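To make the exemplar-free, recursive knowledge-updating idea concrete, the sketch below shows a generic sequential Bayesian update for a linear-Gaussian model: the posterior after each data batch becomes the prior for the next, so past instances can be discarded. This is only an illustrative assumption-laden example, not the authors' mixture-of-experts or Gaussian process model; all function names, hyperparameters, and the simulated data are hypothetical.

```python
# Minimal sketch (not the paper's model): exemplar-free sequential Bayesian
# updating for a linear-Gaussian model. The posterior from batch t is the
# prior for batch t+1, so no previously seen instances are stored.
import numpy as np

def update_posterior(mean, cov, X, y, noise_var=1.0):
    """One conjugate update: prior N(mean, cov) -> posterior given batch (X, y)."""
    prior_prec = np.linalg.inv(cov)
    post_prec = prior_prec + X.T @ X / noise_var      # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (prior_prec @ mean + X.T @ y / noise_var)
    return post_mean, post_cov

rng = np.random.default_rng(0)
d = 3
mean, cov = np.zeros(d), 10.0 * np.eye(d)             # broad initial prior
true_w = np.array([1.0, -2.0, 0.5])                   # simulated ground truth

for t in range(5):                                    # stream of data batches
    X = rng.normal(size=(20, d))
    y = X @ true_w + rng.normal(scale=1.0, size=20)
    mean, cov = update_posterior(mean, cov, X, y)     # batch can be discarded afterwards
    print(f"batch {t}: posterior mean = {np.round(mean, 2)}")
```

The same recursion underlies the framework described in the abstract, where the conjugate linear-Gaussian update is replaced by scalable exact inference in a mixture of experts model.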
