Poster in Workshop: UniReps: Unifying Representations in Neural Models
Connecting Neural Models Latent Geometries with Relative Geodesic Representations
Hanlin Yu · Berfin Inal · Marco Fumero
Keywords: [ representation learning ] [ representation alignment ] [ latent space geometry ]
Neural models learn representations of high-dimensional data that lie on low-dimensional manifolds. Multiple factors, including stochasticity in the training process, may induce different representations even when the same task is learned on the same data. However, when a latent structure is shared between different representational spaces, it has been shown that it is possible to model a transformation between them. In this work, we show how leveraging the differential-geometric structure of the latent spaces of neural models makes it possible to precisely capture the transformations between distinct latent spaces. We validate our method experimentally on autoencoder models and real pretrained foundation vision models across diverse architectures, initializations, and tasks.
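The abstract gives no implementation details, so the following is only an illustrative sketch (all function names are hypothetical, not the authors' code) of one standard way to exploit latent-space geometry in the spirit of a "relative geodesic representation": approximate on-manifold (geodesic) distances with shortest paths on a k-nearest-neighbor graph (as in Isomap), then describe each latent point by its geodesic distances to a small set of anchor points. Because pairwise distances are preserved under isometries, such a representation is invariant to, e.g., rotations of the latent space.

```python
import heapq
import math

def knn_graph(points, k):
    """Symmetrized k-nearest-neighbor graph with Euclidean edge weights."""
    n = len(points)
    adj = [[] for _ in range(n)]
    for i in range(n):
        nearest = sorted(
            (math.dist(points[i], points[j]), j) for j in range(n) if j != i
        )[:k]
        for d, j in nearest:
            adj[i].append((j, d))
            adj[j].append((i, d))  # symmetrize so the graph is undirected
    return adj

def geodesic_distances(adj, src):
    """Dijkstra shortest paths: graph distances approximate geodesics."""
    dist = [math.inf] * len(adj)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            nd = d + w
            if nd < dist[v]:
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def relative_geodesic_representation(points, anchors, k=4):
    """Represent each point by its geodesic distances to the anchor points."""
    adj = knn_graph(points, k)
    cols = [geodesic_distances(adj, a) for a in anchors]
    return [[col[i] for col in cols] for i in range(len(points))]

# Toy "latent space": points sampled on a unit circle in 2D.
n = 40
pts = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
       for i in range(n)]
rep = relative_geodesic_representation(pts, anchors=[0, 10, 20])
```

Note that for the point diametrically opposite anchor 0, the graph-geodesic distance (close to pi, since the path hugs the circle) exceeds the straight-line Euclidean distance of 2, and the representation is unchanged if every latent point is rotated by the same angle.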