

Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

SVFT: Parameter-Efficient Fine-Tuning with Singular Vectors

Vijay Chandra Lingam · Atula Neerkaje · Aditya Vavre · Aneesh Shetty · Gautham Krishna Gudur · Joydeep Ghosh · Alex Dimakis · Eunsol Choi · Aleksandar Bojchevski · Sujay Sanghavi


Abstract: Popular parameter-efficient fine-tuning (PEFT) methods, such as LoRA and its variants, freeze the pre-trained model weights $\mathbf{W}$ and inject learnable matrices $\mathbf{\Delta W}$. These $\mathbf{\Delta W}$ matrices are structured for efficient parameterization, often using techniques like low-rank approximations or scaling vectors. However, these methods typically show a performance gap compared to full fine-tuning. Although recent PEFT methods have narrowed this gap, they do so at the cost of additional learnable parameters. We propose SVFT, which enables a trade-off between the number of trainable parameters and model expressivity by allowing a flexible number of off-diagonal interactions between singular vectors in $\mathbf{\Delta W}$, distinguishing it from previous SVD-based methods. This approach provides fine-grained control over expressivity through the number of coefficients. Extensive experiments on language and vision benchmarks demonstrate that SVFT recovers up to 96% of full fine-tuning performance while training only 0.006% to 0.25% of parameters, outperforming existing methods that recover only up to 85% of performance using 0.03% to 0.8% of the trainable parameter budget.
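
Below is a minimal PyTorch sketch of the idea described in the abstract: the frozen weight is decomposed via SVD, and only a small set of coefficients coupling the singular vectors is trained, with $\mathbf{\Delta W} = \mathbf{U} \mathbf{M} \mathbf{V}^\top$. This is not the authors' implementation; the class name `SVFTLinear`, the `num_offdiag` parameter, and the banded sparsity pattern used to control the number of trainable coefficients are illustrative assumptions.

```python
import torch
import torch.nn as nn


class SVFTLinear(nn.Module):
    """Illustrative SVFT-style adapter (sketch, not the authors' code).

    The frozen weight W is factored as W = U diag(S) V^T. Only a small
    coefficient matrix M (main diagonal plus a few off-diagonal bands) is
    trained, and the effective weight becomes W + U M V^T.
    """

    def __init__(self, weight: torch.Tensor, num_offdiag: int = 1):
        super().__init__()
        # Frozen pre-trained weight and its singular vectors.
        U, S, Vh = torch.linalg.svd(weight, full_matrices=False)
        self.register_buffer("weight", weight)
        self.register_buffer("U", U)
        self.register_buffer("Vh", Vh)

        r = S.shape[0]
        # Banded sparsity pattern: the number of off-diagonals controls the
        # trade-off between trainable parameters and expressivity.
        mask = torch.zeros(r, r)
        for k in range(-num_offdiag, num_offdiag + 1):
            mask += torch.diag(torch.ones(r - abs(k)), diagonal=k)
        self.register_buffer("mask", (mask > 0).float())

        # Learnable coefficients; only entries under `mask` take effect.
        self.M = nn.Parameter(torch.zeros(r, r))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta_w = self.U @ (self.M * self.mask) @ self.Vh  # Delta W = U M V^T
        return x @ (self.weight + delta_w).T


# Usage: wrap the frozen weight of a pre-trained linear layer.
base = nn.Linear(64, 64, bias=False)
adapter = SVFTLinear(base.weight.detach(), num_offdiag=1)
out = adapter(torch.randn(8, 64))
```

With `num_offdiag=0` only the diagonal coefficients are trained (the smallest parameter budget); increasing `num_offdiag` adds off-diagonal interactions between singular vectors and hence expressivity, at the cost of more trainable parameters.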
