

Poster

Non-Gaussian Gaussian Processes for Few-Shot Regression

Marcin Sendera · Jacek Tabor · Aleksandra Nowak · Andrzej Bedychaj · Massimiliano Patacchiola · Tomasz Trzcinski · Przemysław Spurek · Maciej Zieba

Keywords: [ Generative Model ] [ Few Shot Learning ] [ Meta Learning ] [ Kernel Methods ] [ Machine Learning ]


Abstract:

Gaussian Processes (GPs) have been widely used in machine learning to model distributions over functions, with applications including multi-modal regression, time-series prediction, and few-shot learning. GPs are particularly useful in the last application since they rely on Normal distributions and enable closed-form computation of the posterior probability function. Unfortunately, because the resulting posterior is not flexible enough to capture complex distributions, GPs assume high similarity between subsequent tasks - a requirement rarely met in real-world conditions. In this work, we address this limitation by leveraging the flexibility of Normalizing Flows to modulate the posterior predictive distribution of the GP. This makes the GP posterior locally non-Gaussian, therefore we name our method Non-Gaussian Gaussian Processes (NGGPs). More precisely, we propose an invertible ODE-based mapping that operates on each component of the random variable vectors and shares the parameters across all of them. We empirically tested the flexibility of NGGPs on various few-shot learning regression datasets, showing that the mapping can incorporate context embedding information to model different noise levels for periodic functions. As a result, our method shares the structure of the problem between subsequent tasks, but the contextualization allows for adaptation to dissimilarities. NGGPs outperform the competing state-of-the-art approaches on a diversified set of benchmarks and applications.
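The central construction described in the abstract, a closed-form GP posterior whose samples are warped component-wise by a shared, invertible ODE-based mapping, can be sketched in a short PyTorch example. The sketch below is illustrative only and is not the authors' implementation: the RBF kernel, the small velocity network (ODEVelocity), and the fixed-step RK4 integrator are assumptions chosen for the example, and the likelihood/training step via the change-of-variables formula is omitted.

    import torch
    import torch.nn as nn

    def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
        # Squared-exponential kernel k(x, x') = variance * exp(-|x - x'|^2 / (2 l^2)).
        d2 = (x1.unsqueeze(1) - x2.unsqueeze(0)) ** 2
        return variance * torch.exp(-0.5 * d2 / lengthscale ** 2)

    def gp_posterior(x_train, y_train, x_test, noise=1e-2):
        # Closed-form Gaussian posterior mean and covariance on the test inputs.
        K = rbf_kernel(x_train, x_train) + noise * torch.eye(len(x_train))
        K_s = rbf_kernel(x_train, x_test)
        K_ss = rbf_kernel(x_test, x_test)
        K_inv = torch.linalg.inv(K)
        mean = K_s.T @ K_inv @ y_train
        cov = K_ss - K_s.T @ K_inv @ K_s
        return mean, cov

    class ODEVelocity(nn.Module):
        # Velocity field f(y, t); integrating dy/dt = f(y, t) from t=0 to t=1
        # defines an invertible mapping applied to each output component,
        # with parameters shared across all components (illustrative stand-in
        # for the paper's ODE-based flow).
        def __init__(self, hidden=32):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2, hidden), nn.Tanh(), nn.Linear(hidden, 1))

        def forward(self, y, t):
            t_col = torch.full_like(y, t)
            return self.net(torch.cat([y, t_col], dim=-1))

    def integrate_rk4(f, y0, t0=0.0, t1=1.0, steps=20):
        # Fixed-step RK4 integration; acts elementwise on each component, so the
        # same flow modulates every marginal of the GP posterior predictive.
        y, h, t = y0, (t1 - t0) / steps, t0
        for _ in range(steps):
            k1 = f(y, t)
            k2 = f(y + 0.5 * h * k1, t + 0.5 * h)
            k3 = f(y + 0.5 * h * k2, t + 0.5 * h)
            k4 = f(y + h * k3, t + h)
            y = y + h / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
            t = t + h
        return y

    # Toy few-shot regression task: a handful of support points from a noisy sine.
    torch.manual_seed(0)
    x_train = torch.linspace(-3, 3, 8)
    y_train = torch.sin(x_train) + 0.1 * torch.randn(8)
    x_test = torch.linspace(-3, 3, 50)

    mean, cov = gp_posterior(x_train, y_train, x_test)

    # Draw Gaussian posterior samples, then push them through the ODE-based
    # mapping so the resulting predictive marginals are no longer Gaussian.
    L = torch.linalg.cholesky(cov + 1e-5 * torch.eye(len(x_test)))
    samples = mean.unsqueeze(0) + torch.randn(64, len(x_test)) @ L.T

    flow = ODEVelocity()
    warped = integrate_rk4(flow, samples.reshape(-1, 1)).reshape(64, len(x_test))
    print(warped.shape)  # torch.Size([64, 50])

In the full method, the flow would also be conditioned on a task/context embedding (so that, e.g., different noise levels can be modeled per task) and trained jointly with the GP hyperparameters by maximizing the transformed predictive likelihood; the snippet above only shows the forward sampling path.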
