Poster
Expanding Sparse Tuning for Low Memory Usage
Shufan Shen · Junshu Sun · Xiangyang Ji · Qingming Huang · Shuhui Wang
Parameter-efficient fine-tuning (PEFT) is an effective method for adapting pre-trained vision models to downstream tasks by tuning a small subset of parameters. Among PEFT methods, sparse tuning achieves superior performance by adjusting only the weights most relevant to downstream tasks, rather than densely tuning the whole weight matrix. However, this performance improvement comes with increased memory usage, which stems from two factors: storing the whole weight matrix as learnable parameters in the optimizer, and additionally storing the indexes of the tunable weights. In this paper, we propose a method named SNELL (Sparse tuning with kerNELized LoRA) to enable sparse tuning with low memory usage. To achieve low memory usage, SNELL decomposes the tunable matrix for sparsification into two learnable low-rank matrices, avoiding the costly storage of the original whole matrix. Furthermore, a competition-based sparsification mechanism is proposed to avoid storing tunable weight indexes. To maintain the effectiveness of sparse tuning with low-rank matrices, we extend the low-rank decomposition from a kernel perspective. Specifically, we apply nonlinear kernel functions to the whole-matrix merging, which increases the rank of the merged matrix. The higher rank enhances SNELL's ability to adapt pre-trained models to downstream tasks. Extensive experiments on multiple downstream tasks show that SNELL achieves state-of-the-art performance with low memory usage, extending effective PEFT with sparse tuning to large-scale models. Code is included in the supplement and will be released on GitHub.
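The sketch below illustrates how the components described in the abstract could fit together in PyTorch. The polynomial kernel, the top-k magnitude competition rule, and all names and hyperparameters (e.g. KernelizedSparseLoRALinear, rank, sparsity, degree) are illustrative assumptions, not the authors' released implementation; the point is only that the optimizer holds just the two low-rank factors and that the sparsity mask is recomputed from the merged update rather than stored as indexes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KernelizedSparseLoRALinear(nn.Module):
    """Frozen linear layer adapted by a sparse, kernel-merged low-rank update.
    Only the two low-rank factors A and B are trainable, so the optimizer never
    holds the full weight matrix, and the sparsity mask is recomputed from the
    update's magnitudes at merge time, so no index tensor is stored.
    (Illustrative sketch; kernel and sparsification rule are assumptions.)"""

    def __init__(self, weight, bias=None, rank=8, sparsity=0.9, degree=2):
        super().__init__()
        out_dim, in_dim = weight.shape
        self.weight = nn.Parameter(weight, requires_grad=False)  # frozen pre-trained weight
        self.bias = nn.Parameter(bias, requires_grad=False) if bias is not None else None
        # Learnable low-rank factors: the only parameters the optimizer stores.
        self.A = nn.Parameter(torch.randn(out_dim, rank) * 0.01)
        self.B = nn.Parameter(torch.zeros(rank, in_dim))
        self.sparsity = sparsity   # fraction of update entries to drop
        self.degree = degree       # degree of the assumed polynomial kernel

    def merged_update(self):
        # A plain LoRA merge would be A @ B, whose rank is at most `rank`.
        # Applying a nonlinear (here polynomial) kernel to the merge raises the
        # rank of the resulting update matrix.
        linear = self.A @ self.B
        delta = (linear + 1.0) ** self.degree - 1.0
        # Competition-based sparsification: entries compete by magnitude and only
        # the largest (1 - sparsity) fraction survive. The mask is derived from
        # the update itself, so tunable-weight indexes never need to be stored.
        keep = max(1, int(delta.numel() * (1.0 - self.sparsity)))
        threshold = delta.abs().flatten().kthvalue(delta.numel() - keep).values
        return delta * (delta.abs() > threshold)

    def forward(self, x):
        return F.linear(x, self.weight + self.merged_update(), self.bias)


# Example: wrap a (randomly initialized stand-in for a) pre-trained 768x768
# projection and fine-tune only A and B.
layer = KernelizedSparseLoRALinear(torch.randn(768, 768), rank=8, sparsity=0.9)
optimizer = torch.optim.AdamW(
    [p for p in layer.parameters() if p.requires_grad], lr=1e-3)
out = layer(torch.randn(4, 768))
```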