

Oral in Workshop: International Workshop on Federated Foundation Models in Conjunction with NeurIPS 2024 (FL@FM-NeurIPS'24)

Hot Pluggable Federated Learning

Lei SHEN · Zhenheng Tang · Lijun Wu · Yonggang Zhang · Xiaowen Chu · Tao Qin · Bo Han


Abstract:

Personalized federated learning (PFL) achieves high performance by assuming that clients only encounter test data locally, an assumption that does not hold in many generic federated learning (GFL) scenarios. In this work, we show that personalized models (PMs) can nevertheless be used to enhance GFL. However, storing and selecting whole models incurs impractical computation and communication costs. Inspired by model-component methods that edit a sub-model for a specific purpose, we design an efficient and effective framework named Hot-Pluggable Federated Learning (HPFL). Specifically, clients individually train personalized plug-in modules on top of a shared backbone and upload them, together with a plug-in marker, to a modular store on the server. At the inference stage, an accurate selection algorithm allows clients to identify and retrieve suitable plug-in modules from the modular store to enhance their generalization performance on the target data distribution. Furthermore, we provide differential privacy protection during selection with a theoretical guarantee. Our comprehensive experiments and ablation studies demonstrate that HPFL significantly outperforms state-of-the-art GFL and PFL algorithms. Additionally, we empirically show HPFL's remarkable potential for resolving other practical FL problems, such as continual federated learning, and discuss its possible applications in one-shot FL, anarchic FL, and an FL plug-in market. Our work is the first attempt to improve GFL performance through a selection mechanism with personalized plug-ins.
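To make the described workflow concrete, below is a minimal, self-contained Python/NumPy sketch of the train-upload-select cycle the abstract outlines. The names (PluginStore, make_marker, select_plugin), the use of mean backbone features as plug-in markers, the Gaussian noise standing in for the differential-privacy protection, and the nearest-marker selection rule are all illustrative assumptions, not the paper's actual algorithm or API.

import numpy as np

class PluginStore:
    """Server-side modular store: holds client plug-in modules and their markers."""
    def __init__(self):
        self.plugins = {}   # client_id -> plug-in parameters (e.g., a small head)
        self.markers = {}   # client_id -> marker summarizing the plug-in's training data

    def upload(self, client_id, plugin, marker):
        self.plugins[client_id] = plugin
        self.markers[client_id] = marker

def make_marker(features, noise_scale=0.0, rng=None):
    """Marker as the mean backbone feature of a client's data; optional Gaussian
    noise is a stand-in for the differential-privacy protection mentioned above."""
    rng = rng or np.random.default_rng(0)
    marker = features.mean(axis=0)
    if noise_scale > 0:
        marker = marker + rng.normal(0.0, noise_scale, size=marker.shape)
    return marker

def select_plugin(store, query_features):
    """Pick the plug-in whose marker is closest to the query's mean backbone feature."""
    query_marker = query_features.mean(axis=0)
    best_id = min(
        store.markers,
        key=lambda cid: np.linalg.norm(store.markers[cid] - query_marker),
    )
    return best_id, store.plugins[best_id]

# Toy usage: two clients with different feature distributions.
rng = np.random.default_rng(42)
store = PluginStore()
for cid, shift in [("client_A", 0.0), ("client_B", 3.0)]:
    feats = rng.normal(shift, 1.0, size=(100, 16))   # stand-in for backbone features
    store.upload(cid, plugin=f"head_of_{cid}",
                 marker=make_marker(feats, noise_scale=0.1, rng=rng))

test_feats = rng.normal(3.0, 1.0, size=(20, 16))     # test data resembling client_B
chosen_id, plugin = select_plugin(store, test_feats)
print(chosen_id)  # expected: client_B

In this toy run, the test features are drawn from a distribution matching client_B, so the nearest-marker rule retrieves that client's plug-in; the actual HPFL selection algorithm and privacy mechanism are detailed in the paper.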
