

Poster in Workshop: UniReps: Unifying Representations in Neural Models

It's All Relative: Relative Uncertainty in Latent Spaces using Relative Representations

Fabian Mager · Valentino Maiorca · Lars Kai Hansen

Keywords: [ Relative Representations ] [ Uncertainty ] [ XAI ]


Abstract:

Many explainable artificial intelligence (XAI) methods investigate the embedding space of a given neural network. Uncertainty quantification in these spaces can lead to a better understanding of the mechanisms learned by the network. To quantify the uncertainty of functions in latent spaces, we can invoke ensembles of trained models. Such ensembles can be confounded by reparametrization, i.e., lack of identifiability. We consider two mechanisms for reducing reparametrization "noise", one based on relative representations and one based on interpolation in weight space. By sampling embedding spaces along a curve connecting two fully converged networks without an increase in loss, we show that latent uncertainty is overestimated when embedding spaces are compared without accounting for reparametrization. By transforming the absolute embedding space into a space of relative proximities, we show that the spaces become aligned and the measured uncertainty decreases. Using this method, we show that the most non-trivial changes to the latent space occur around the midpoint of the curve connecting the independently trained networks.
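To make the two ingredients concrete, here is a minimal PyTorch sketch. The cosine-similarity construction follows the standard relative representations formulation (each sample is re-expressed by its similarity to a shared set of anchors); the linear weight interpolation is only a placeholder for the low-loss connecting curve described in the abstract, which is generally nonlinear. All function names and shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def relative_representation(embeddings: torch.Tensor,
                            anchors: torch.Tensor) -> torch.Tensor:
    """Map absolute embeddings (n, d) to relative ones (n, k).

    Each sample is represented by its cosine similarity to k anchor
    embeddings, making the result invariant to rotations and rescalings
    of the absolute latent space.
    """
    emb = F.normalize(embeddings, dim=-1)  # unit-norm rows, shape (n, d)
    anc = F.normalize(anchors, dim=-1)     # unit-norm rows, shape (k, d)
    return emb @ anc.T                     # cosine similarities, shape (n, k)


def interpolate_weights(state_a: dict, state_b: dict, t: float) -> dict:
    """Linearly interpolate two converged networks' state dicts at t in [0, 1].

    A simplified stand-in for sampling along a low-loss connecting curve;
    non-floating-point buffers (e.g. BatchNorm counters) are copied as-is.
    """
    return {
        k: torch.lerp(state_a[k], state_b[k], t)
        if state_a[k].is_floating_point() else state_a[k]
        for k in state_a
    }


# Hypothetical usage: z_a, z_b are embeddings of the same inputs under two
# independently trained models; anchor_idx selects a shared anchor subset.
# rel_a = relative_representation(z_a, z_a[anchor_idx])
# rel_b = relative_representation(z_b, z_b[anchor_idx])
# Disagreement between rel_a and rel_b can then be compared across points
# t along the interpolation path to probe latent uncertainty.
```

Comparing `rel_a` and `rel_b` rather than the raw embeddings is what removes the reparametrization "noise": two latent spaces that differ only by an angle-preserving transformation yield identical relative representations.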
