

Poster in Workshop: UniReps: Unifying Representations in Neural Models

Topology Preserving Regularization for Independent Training of Inter-operable Models

Nicolas Zilberstein · Akshay Malhotra · Shahab Hamidi-Rad · Yugeswar Deenoo

Keywords: [ Topological autoencoders ] [ Latent space alignment ] [ Zero-shot learning ]


Abstract:

Developing schemes that enable zero-shot stitching between different neural networks with minimal or no information exchange has become increasingly important in the era of large, powerful pre-trained models. Consider an auto-encoder based data compression framework: the ability to select architectures and train the encoder and decoder completely independently, while still guaranteeing interoperability between them, could transform how such models are developed, deployed, and maintained. In this work, we propose a novel approach that uses topological regularization to align the latent spaces of two different auto-encoder models trained independently, without coordination. Our solution introduces two distinct training schemes: Data2Latent and Latent2Latent. The Data2Latent scheme preserves the topological structure of the input data, while the Latent2Latent scheme preserves the latent space of a pre-trained, unconstrained model. Through numerical experiments on reconstruction tasks, we demonstrate that our approach yields a near-optimal solution, closely approximating the performance of an end-to-end model.
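The abstract does not give the exact form of the regularizer, but topological regularization of this kind is commonly approximated by matching pairwise distance structure between two spaces (as in topological autoencoders). The sketch below illustrates that idea for the Data2Latent case: a penalty that grows when the latent batch geometry deviates from the input batch geometry. The function names and the normalized distance-matching loss are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pairwise_distances(X):
    # Euclidean distance matrix for a batch of row vectors.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    return np.sqrt(np.maximum(d2, 0.0))

def topology_reg(X, Z):
    # Distance-matching surrogate for a topology-preserving penalty:
    # compare the (scale-normalized) pairwise distance matrices of the
    # input batch X and its latent codes Z. Zero iff the two batches
    # share the same relative geometry.
    Dx = pairwise_distances(X)
    Dz = pairwise_distances(Z)
    Dx = Dx / (Dx.max() + 1e-12)
    Dz = Dz / (Dz.max() + 1e-12)
    return float(np.mean((Dx - Dz) ** 2))
```

In training, such a term would be added to the reconstruction loss; because it constrains only relative geometry, two encoders trained separately against the same input-space structure can end up with alignable latent spaces.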
