Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability
Navigating Parameter Space with Geodesic Interpolation: A New Approach to Efficient Fine-Tuning
Sophia Abraham
Fine-tuning large-scale pre-trained models is computationally expensive and resource-inefficient. We introduce Geodesic Low-Rank Adaptation (GLRA), a conceptual framework that rethinks how fine-tuning updates are applied in deep neural networks. Rather than relying on standard parameter updates, which can converge to sharp minima and exhibit unstable training, GLRA follows geodesic paths that smooth transitions through weight space. Combined with low-rank adaptation, this approach aims to keep computational overhead low while favoring flatter minima, potentially improving generalization and stability during fine-tuning. The paper focuses on the theoretical implications of geodesic interpolation, hypothesizing that it offers new insight into efficient model adaptation, and argues through mathematical reasoning that GLRA can improve stability by avoiding abrupt transitions in the optimization landscape. Experimental validation is left to future work, but the conceptual framework opens a path for research at the intersection of geometry and parameter-efficient learning, inviting further investigation into its potential.
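To make the idea more concrete, below is a minimal, purely illustrative sketch of what a geodesic low-rank update could look like. The abstract does not specify GLRA's construction, so everything here is an assumption rather than the authors' method: the choice of spherical (slerp-style) interpolation as the geodesic, the function names `slerp` and `glra_weights`, the rank of the low-rank factors, and the path parameter `t` are all hypothetical.

```python
# Illustrative sketch only: the abstract does not define GLRA's math.
# One plausible reading combines a LoRA-style low-rank target with a
# geodesic (here, spherical) interpolation from the pretrained weights
# toward the adapted weights, instead of a straight-line update.
import numpy as np

def slerp(w0: np.ndarray, w1: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical (geodesic) interpolation between two weight tensors of the same shape."""
    v0, v1 = w0.ravel(), w1.ravel()
    n0, n1 = np.linalg.norm(v0), np.linalg.norm(v1)
    u0, u1 = v0 / (n0 + eps), v1 / (n1 + eps)
    omega = np.arccos(np.clip(u0 @ u1, -1.0, 1.0))  # angle between weight directions
    if omega < eps:                                  # nearly parallel: fall back to lerp
        return (1 - t) * w0 + t * w1
    coef0 = np.sin((1 - t) * omega) / np.sin(omega)
    coef1 = np.sin(t * omega) / np.sin(omega)
    # Interpolate the direction along the great circle and the norm linearly.
    direction = coef0 * u0 + coef1 * u1
    norm = (1 - t) * n0 + t * n1
    return (norm * direction).reshape(w0.shape)

def glra_weights(w_pre: np.ndarray, A: np.ndarray, B: np.ndarray, t: float) -> np.ndarray:
    """Move a fraction t along a geodesic from the pretrained weights to the low-rank-adapted weights."""
    w_adapted = w_pre + B @ A  # LoRA-style low-rank target; rank = A.shape[0]
    return slerp(w_pre, w_adapted, t)

# Example: a 64x64 layer adapted with rank-4 factors, evaluated halfway along the path.
rng = np.random.default_rng(0)
w_pre = rng.normal(size=(64, 64))
A = rng.normal(size=(4, 64)) * 0.01
B = rng.normal(size=(64, 4)) * 0.01
w_half = glra_weights(w_pre, A, B, t=0.5)
```

In this reading, `t = 0` recovers the pretrained weights and `t = 1` the low-rank-adapted weights, with intermediate values tracing a curved rather than straight path between them; whether this matches the geodesic the authors intend is not determined by the abstract.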