

Poster in Workshop: UniReps: Unifying Representations in Neural Models

Rethinking Fine-tuning Through Geometric Perspective

Krishna Sri Ipsit Mantri · Moshe Eliasof · Carola-Bibiane Schönlieb · Bruno Ribeiro

Keywords: [ Fine Tuning ] [ Low Rank Adaptation ] [ ODEs ]


Abstract: Fine-tuning pre-trained neural networks has become a cornerstone of transfer learning. However, the practical success of existing methods like low-rank adaptation (LoRA) lacks theoretical explanation. We introduce geometry-guided fine-tuning, a novel paradigm that models the fine-tuning process as the subtle movement of pre-trained weights on a low-dimensional manifold. Our approach formalizes this process through a learnable ordinary differential equation (ODE)-based framework that controls the search space of the weights, bridging existing methods with geometric principles. We empirically evaluate our method in the context of multi-task learning (MTL) fine-tuning of hierarchical vision transformers in computer vision. We propose a parameter-efficient ODE and evaluate it on the PASCAL-Context MTL benchmark. Our approach, dubbed DeLoRA, offers competitive performance across multiple dense prediction tasks, reducing trainable parameters by up to 4$\times$ compared to the best-performing baseline. This work advances both the theoretical understanding and practical application of fine-tuning, promoting efficient learning in resource-constrained environments.
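To make the idea of fine-tuning as an ODE-driven displacement of frozen weights concrete, below is a minimal PyTorch sketch, not the authors' DeLoRA implementation. It assumes a low-rank parameterization of the weight "velocity" field and a fixed-step Euler integrator; the class name `ODEDeltaLinear`, the `rank` and `num_steps` hyperparameters, and the per-step gating are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ODEDeltaLinear(nn.Module):
    """Hypothetical sketch: adapt a frozen linear layer by moving its
    pre-trained weight along a learnable low-rank ODE trajectory,
    dW/dt ~ gate(t) * (A @ B), integrated with fixed-step Euler."""

    def __init__(self, base: nn.Linear, rank: int = 4, num_steps: int = 4):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)  # pre-trained weights stay frozen
        out_f, in_f = base.weight.shape
        # Low-rank factors of the weight velocity field (assumed parameterization).
        self.A = nn.Parameter(torch.zeros(out_f, rank))
        self.B = nn.Parameter(torch.randn(rank, in_f) * 0.01)
        # Learnable per-step scaling makes the flow time-dependent;
        # with all gates equal, the update collapses to a single
        # LoRA-style low-rank increment W_0 + A @ B.
        self.gate = nn.Parameter(torch.ones(num_steps))
        self.num_steps = num_steps

    def fine_tuned_weight(self) -> torch.Tensor:
        # Euler integration from t=0 to t=1: a small, controlled
        # displacement of W_0 restricted to a rank-r search space.
        W = self.base.weight
        dt = 1.0 / self.num_steps
        velocity = self.A @ self.B
        for k in range(self.num_steps):
            W = W + dt * self.gate[k] * velocity
        return W

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.linear(x, self.fine_tuned_weight(), self.base.bias)


# Illustrative usage: wrap one projection layer of a pre-trained backbone,
# then train only the ODE parameters (A, B, gate).
layer = ODEDeltaLinear(nn.Linear(768, 768), rank=8, num_steps=4)
out = layer(torch.randn(2, 768))
```

The trainable parameters here scale with the rank rather than with the full weight matrix, which is the kind of parameter saving the abstract attributes to the ODE-based formulation; the exact parameter-efficient ODE used by DeLoRA may differ.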
