Poster
Winner-Take-All Column Row Sampling for Memory Efficient Adaptation of Language Model
Zirui Liu · Guanchu Wang · Shaochen (Henry) Zhong · Zhaozhuo Xu · Daochen Zha · Ruixiang (Ryan) Tang · Zhimeng (Stephen) Jiang · Kaixiong Zhou · Vipin Chaudhary · Shuai Xu · Xia Hu
Great Hall & Hall B1+B2 (level 1) #1205
Abstract:
As model sizes grow rapidly, fine-tuning large pre-trained language models has become increasingly difficult due to their extensive memory usage. Previous works usually focus on reducing the number of trainable parameters in the network. While the model parameters do contribute to memory usage, the primary memory bottleneck during training arises from storing feature maps, also known as activations, as they are crucial for gradient calculation. Notably, machine learning models are typically trained using stochastic gradient descent. We argue that in stochastic optimization, models can handle noisy gradients as long as the gradient estimator is unbiased with reasonable variance. Following this motivation, we propose a new family of unbiased estimators, called WTA-CRS, for matrix multiplication with reduced variance, which only requires storing the sub-sampled activations for calculating the gradient. Our work provides both theoretical and experimental evidence that, in the context of tuning transformers, our proposed estimators exhibit lower variance compared to existing ones. By replacing the linear operation with our approximated one in transformers, we can achieve up to $2.7\times$ peak memory reduction with almost no accuracy drop and enable up to $6.4\times$ larger batch sizes. Under the same hardware, WTA-CRS enables better downstream task performance by applying larger models and/or faster training with larger batch sizes. The code is available at https://anonymous.4open.science/r/WTACRS-A5C5/.
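To make the core idea concrete, below is a minimal sketch of the standard column-row sampling (CRS) estimator for a matrix product that WTA-CRS builds on: sample a few column-row pairs, rescale them so the estimate is unbiased, and store only the sampled columns of the activation matrix. The function name `crs_matmul` and the norm-proportional sampling probabilities are illustrative assumptions, not the authors' exact winner-take-all algorithm.

```python
import numpy as np

def crs_matmul(A, B, k, rng=None):
    """Unbiased column-row sampling estimate of A @ B.

    Samples k column-row index pairs with probability proportional to
    ||A[:, i]|| * ||B[i, :]|| and rescales each sampled outer product by
    1 / (k * p_i), so the estimator is unbiased. Only the k sampled
    columns of A (the activations) would need to be kept for backward.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Sampling probabilities proportional to column/row norm products
    # (a common variance-reducing choice; assumed here for illustration).
    norms = np.linalg.norm(A, axis=0) * np.linalg.norm(B, axis=1)
    probs = norms / norms.sum()
    idx = rng.choice(A.shape[1], size=k, replace=True, p=probs)
    # Rescale sampled columns so the expectation equals A @ B.
    scale = 1.0 / (k * probs[idx])
    return (A[:, idx] * scale) @ B[idx, :]

# Quick sanity check: averaging many estimates should approach A @ B.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 512))
B = rng.standard_normal((512, 32))
est = np.mean([crs_matmul(A, B, k=128, rng=rng) for _ in range(200)], axis=0)
print(np.abs(est - A @ B).mean())  # small relative to the entries of A @ B
```

The memory saving comes from keeping only `A[:, idx]` (and the indices) for the backward pass instead of the full activation matrix; the paper's contribution is a lower-variance way of choosing which column-row pairs to keep.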