Poster in Workshop: Third Workshop on Efficient Natural Language and Speech Processing (ENLSP-III): Towards the Future of Large Language Models and their Emerging Descendants
Efficient Stagewise Pretraining via Progressive Subnetworks
Abhishek Panigrahi · Nikunj Saunshi · Kaifeng Lyu · Sobhan Miryoosefi · Sashank Reddi · Satyen Kale · Sanjiv Kumar
Recent developments in language models have sparked interest in efficient pretraining methods. A recent and effective paradigm is stagewise training, where the depth of the model is gradually increased over the course of training, starting from a shallow network (e.g. gradual stacking (Reddi et al., 2023)). While this is appealing because it yields resource and wall-time savings, it has limitations: the full model's performance cannot be evaluated during earlier stages, and model quality can degrade because of the smaller capacity of the models in the initial stages. In this work, we propose an alternative framework, progressive subnetwork training, that maintains the full model throughout training but trains only subnetworks within the model at each step. We empirically focus on a simple instantiation of this framework - Random Path Training (RAPTR) - which trains only a sub-path of layers in each step, progressively increasing the path length in stages. We demonstrate that RAPTR achieves better pretraining loss for BERT and UL2 language models while requiring 20-33% fewer FLOPs than standard training, and is competitive with or better than gradual stacking at similar FLOPs. Furthermore, RAPTR shows better downstream performance on UL2, improving multiple QA and SuperGLUE tasks by 1-5% compared to standard training and stacking. Finally, we provide a theoretical basis for RAPTR on residual networks by characterizing their stability, which arises from residual connections and layer normalization.
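To make the idea concrete, below is a minimal sketch of the Random Path Training loop described in the abstract: at each step a random sub-path of residual blocks is sampled and the remaining blocks are bypassed via the residual identity, with the path length increasing across stages. The block structure, stage schedule, dimensions, and names (ResidualBlock, RaptrStack) are illustrative assumptions, not the authors' exact configuration.

```python
# Hedged sketch of Random Path Training (RAPTR); layer counts, stage schedule,
# and module names are hypothetical, not the paper's exact setup.
import random
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.ff = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x):
        # Residual connection + layer norm: skipping this block reduces to the
        # identity map, which is what keeps random sub-paths stable.
        return x + self.ff(self.norm(x))

class RaptrStack(nn.Module):
    def __init__(self, dim, num_layers):
        super().__init__()
        self.blocks = nn.ModuleList(ResidualBlock(dim) for _ in range(num_layers))

    def forward(self, x, path_len):
        # Sample a random sub-path of `path_len` layers for this step;
        # all other layers are bypassed and receive no gradient.
        path = sorted(random.sample(range(len(self.blocks)), path_len))
        for i in path:
            x = self.blocks[i](x)
        return x

# Illustrative stage schedule: progressively longer paths, ending at full depth.
model = RaptrStack(dim=64, num_layers=12)
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
stages = [(1000, 6), (1000, 9), (1000, 12)]      # (steps, path length) per stage
for steps, path_len in stages:
    for _ in range(steps):
        x = torch.randn(8, 16, 64)               # dummy batch (batch, seq, dim)
        loss = model(x, path_len).pow(2).mean()   # placeholder loss for the sketch
        opt.zero_grad()
        loss.backward()
        opt.step()
```

Because shorter paths touch fewer layers per step, the early stages cost proportionally fewer FLOPs, while the full-depth final stage matches standard training; this is the source of the savings claimed in the abstract.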