

Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

MPLoRA: Orthogonal Multi-Path Low-Rank Adaptation for Parameter Efficient Fine-Tuning

Junhan Shi · Fulin Wang · Qing Li · Yong Jiang


Abstract:

Parameter-efficient fine-tuning (PEFT) has become crucial for adapting large language models to specific tasks, with Low-Rank Adaptation (LoRA) emerging as a prominent method. However, capturing diverse representations within LoRA's limited parameter space remains challenging. We propose Multi-Path LoRA (MPLoRA), a novel approach that decomposes the adaptation matrix into multiple smaller matrices with orthogonal constraints. MPLoRA encourages diverse representations and improves adaptation capability without increasing parameter count. Experiments on various tasks demonstrate that MPLoRA outperforms LoRA and other baselines, with notable improvements on datasets with limited samples. Our analysis reveals that both the multi-path structure and orthogonal constraints contribute significantly to MPLoRA's effectiveness. These findings highlight MPLoRA's potential for enhancing LLM performance and generalization, especially in resource-constrained scenarios, offering new insights into parameter-efficient fine-tuning.
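The abstract describes the core mechanism only at a high level: the LoRA update matrix is split into several smaller low-rank paths whose outputs are summed, with an orthogonality constraint encouraging the paths to learn diverse directions. The sketch below illustrates that idea under stated assumptions; the per-path rank split, the zero-initialisation of the B matrices, the exact form of the orthogonality penalty, and hyperparameters such as num_paths and alpha are not specified in the abstract and are chosen here for illustration, so this should not be read as the authors' implementation.

```python
import torch
import torch.nn as nn


class MPLoRALinear(nn.Module):
    """Illustrative multi-path LoRA layer (assumptions, not the paper's code):
    the frozen base weight is adapted by a sum of several small low-rank paths
    whose combined parameter count matches a single LoRA adapter of rank `rank`."""

    def __init__(self, base: nn.Linear, rank: int = 8, num_paths: int = 4, alpha: float = 16.0):
        super().__init__()
        assert rank % num_paths == 0, "rank must split evenly across paths"
        self.base = base
        self.base.weight.requires_grad_(False)   # keep the pretrained weight frozen
        r = rank // num_paths                     # assumed per-path rank
        in_f, out_f = base.in_features, base.out_features
        # One (A_k, B_k) pair per path; B_k starts at zero as in standard LoRA.
        self.A = nn.ParameterList([nn.Parameter(torch.randn(r, in_f) * 0.01) for _ in range(num_paths)])
        self.B = nn.ParameterList([nn.Parameter(torch.zeros(out_f, r)) for _ in range(num_paths)])
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.base(x)
        for A, B in zip(self.A, self.B):
            # Add each path's low-rank update: x -> A_k -> B_k.
            out = out + self.scaling * (x @ A.T) @ B.T
        return out

    def orthogonality_penalty(self) -> torch.Tensor:
        """Assumed regulariser: penalise overlap between the row spaces of the
        different paths' A matrices so that the paths capture distinct directions."""
        penalty = self.A[0].new_zeros(())
        for i in range(len(self.A)):
            for j in range(i + 1, len(self.A)):
                penalty = penalty + (self.A[i] @ self.A[j].T).pow(2).sum()
        return penalty
```

In this sketch the orthogonality penalty would be added to the task loss with a small weight (e.g. loss = task_loss + lam * layer.orthogonality_penalty()); the weighting scheme and which matrices the constraint is applied to are assumptions, not details taken from the paper.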
