Oral in Workshop: International Workshop on Federated Foundation Models in Conjunction with NeurIPS 2024 (FL@FM-NeurIPS'24)
The Future of Large Language Model Pre-training is Federated
Lorenzo Sani · Alexandru-Andrei Iacob · Zeyu Cao · Bill Marino · Yan Gao · Tomas Paulik · Wanru Zhao · William Shen · Preslav Aleksandrov · Xinchi Qiu · Nicholas Lane
Generative pre-trained large language models (LLMs) have demonstrated impressive performance over a wide range of tasks, thanks to the unprecedented amount of data they have been trained on. As established scaling laws indicate, LLMs' future performance improvements depend on the amount of computing power and data sources they can leverage for pre-training. Federated learning (FL) has the potential to unleash the majority of the planet's data and computational resources, which are underutilized by the data-center-focused training methodology of current LLM practice. Our work presents a robust, flexible, reproducible FL approach that enables large-scale collaboration across institutions to train LLMs. We propose a scalable deployment system called Photon to enable the investigation and development of this new training paradigm for LLM pre-training. We show that Photon can be used by organizations interested in collaborating with their private data sources and computational resources for pre-training LLMs with billions of parameters. This paradigm would mobilize more computational and data resources while matching or potentially exceeding centralized performance. We further show that the effectiveness of federated training scales with model size and present our approach for training a billion-scale federated LLM using limited resources. Furthermore, we demonstrate that LLM training is highly resilient to the classical challenges of federated statistical and hardware heterogeneity. Finally, we show that convergence is robust to partial participation, opening the avenue for compute-efficient collaborative training. Photon will help data-rich actors become the protagonists of LLM pre-training instead of leaving the stage to the compute-rich alone.
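To illustrate the paradigm the abstract describes, the sketch below shows a generic FedAvg-style round for cross-institution LLM pre-training: each participating organization trains locally on its private data, and a coordinator averages the resulting weights in proportion to local dataset sizes. This is a minimal, hedged example of the general federated-averaging idea, not Photon's actual implementation; the Hugging Face-style causal-LM interface, hyperparameters, and helper names are assumptions for illustration.

```python
# Minimal sketch of one FedAvg round for federated LLM pre-training.
# Generic illustration only; not Photon's implementation. Assumes a model
# exposing a Hugging Face-style forward(input_ids, labels=...).loss interface.
import copy
import torch


def local_train(global_model, dataloader, local_steps, lr=1e-4):
    """One client's local pre-training pass over its private data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _, (input_ids, labels) in zip(range(local_steps), dataloader):
        loss = model(input_ids, labels=labels).loss  # causal LM loss (assumed interface)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return model.state_dict(), len(dataloader.dataset)


def federated_round(global_model, client_loaders, local_steps):
    """Aggregate locally trained weights, weighted by client dataset size."""
    updates, weights = [], []
    # Partial participation would sample only a subset of clients here.
    for loader in client_loaders:
        state, n_samples = local_train(global_model, loader, local_steps)
        updates.append(state)
        weights.append(n_samples)

    total = sum(weights)
    new_state = {}
    for key in updates[0]:
        avg = sum((w / total) * u[key].float() for u, w in zip(updates, weights))
        new_state[key] = avg.to(updates[0][key].dtype)
    global_model.load_state_dict(new_state)
    return global_model
```

In practice, systems in this space communicate only model updates (never raw data) between institutions and repeat such rounds many times, which is what allows otherwise siloed data and compute to contribute to a single pre-training run.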