

Poster in Workshop: The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models

Scaling Smart: Accelerating Large Language Model Pre-Training with Small Model Initialization

Mohammad Samragh · Iman Mirzadeh · Keivan Alizadeh-Vahid · Fartash Faghri · Minsik Cho · Moin Nabi · Devang Naik · Mehrdad Farajtabar

Keywords: [ Efficient Training ]


Abstract:

The pre-training phase of language models often begins with randomly initialized parameters. With the current trends in scaling models, training their large number of parameters can be extremely slow and costly. In contrast, small language models are less expensive to train, but they often cannot reach the accuracy of large models. In this paper, we explore an intriguing idea that connects these two regimes: Can we develop a method to initialize large language models using smaller pre-trained models? Will such initialization bring any benefits in terms of training time and final accuracy? We introduce HyperCloning, a method that expands the parameters of a pre-trained language model to those of a larger model with increased hidden dimensions. Our method ensures that the larger model retains the functionality of the smaller model, so the larger model inherits the predictive power and accuracy of the smaller model before training starts. We demonstrate that training a model initialized this way yields significant savings in the GPU hours required for pre-training large language models. Implementation of HyperCloning is available at https://github.com/apple/ml-hypercloning/tree/main.
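To illustrate the idea of a function-preserving expansion described in the abstract, the sketch below shows one common way to widen a linear layer: tiling the small weight matrix into a block matrix and rescaling so that, when the hidden activations are duplicated, the larger layer reproduces the smaller layer's outputs. This is a minimal illustrative example, not necessarily the exact scheme used in HyperCloning; the function name `expand_linear` and the duplication-based check are assumptions for the sake of the demo (see the official repository for the actual implementation).

```python
import torch


def expand_linear(weight: torch.Tensor, factor: int = 2) -> torch.Tensor:
    """Expand a (d_out, d_in) weight to (factor*d_out, factor*d_in).

    The small weight is tiled into a block matrix and scaled by 1/factor,
    so that feeding the duplicated input produces the duplicated output,
    i.e. the larger layer computes the same function as the smaller one.
    """
    return weight.repeat(factor, factor) / factor


# Quick check that the expansion preserves the layer's function.
d_in, d_out, factor = 8, 4, 2
w_small = torch.randn(d_out, d_in)
w_large = expand_linear(w_small, factor)

x_small = torch.randn(d_in)
x_large = x_small.repeat(factor)  # duplicated hidden activations

y_small = w_small @ x_small
y_large = w_large @ x_large

assert torch.allclose(y_large, y_small.repeat(factor), atol=1e-5)
```

Applied layer by layer across a transformer, an expansion of this kind lets the wider model start from the small model's predictions rather than from random initialization, which is the property the paper exploits to cut pre-training cost.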
