Poster

FedGMKD: An Efficient Prototype Federated Learning Framework through Knowledge Distillation and Differential Aggregation

Jianqiao Zhang · Caifeng Shan · Jungong Han

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In federated learning (FL), mitigating data heterogeneity across distributed clients is essential for effective and equitable model training. We propose FedGMKD, an algorithm designed for such heterogeneous data landscapes. FedGMKD introduces Cluster Knowledge Fusion (CKF), which combines Gaussian mixture clustering with knowledge distillation to enable robust local model training without public datasets or server-side generative models. In addition, a new Differential Aggregation Technique (DAT) tailors the aggregation process to the distinct feature distribution of each category, improving both efficiency and performance. Together, these components speed convergence and significantly improve both personalized and global model performance in varied data environments. Extensive experiments show that FedGMKD achieves state-of-the-art results in heterogeneous data scenarios, marking a clear advance in personalized federated learning.
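The abstract does not spell out the exact CKF and DAT formulations, but the two ideas can be sketched concretely. The snippet below is a minimal, hypothetical illustration only, not the authors' implementation: it assumes each client fits a per-class Gaussian mixture over its feature embeddings and fuses the component means into class prototypes (a plausible reading of CKF), and that the server averages prototypes across clients weighted by per-class sample counts (one simple stand-in for a differential, per-category aggregation rule). All function names, the diagonal-covariance choice, and the weighting scheme are assumptions for exposition.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def local_prototypes(features, labels, n_components=2):
    """Fit a Gaussian mixture per class on one client's feature embeddings
    and fuse the component means into a single class prototype.
    (Hypothetical sketch of CKF-style prototype extraction.)"""
    prototypes = {}
    for c in np.unique(labels):
        feats_c = features[labels == c]
        k = min(n_components, len(feats_c))  # guard against tiny classes
        gmm = GaussianMixture(n_components=k, covariance_type="diag",
                              random_state=0).fit(feats_c)
        # Fuse component means, weighted by their mixing coefficients,
        # into one prototype per class (one plausible fusion rule).
        prototypes[int(c)] = gmm.weights_ @ gmm.means_
    return prototypes

def aggregate_prototypes(client_protos, client_counts):
    """Server side: average each class prototype across clients, weighted
    by how many samples of that class each client holds. (A simple
    stand-in for a per-category differential aggregation rule.)"""
    global_protos = {}
    classes = {c for protos in client_protos for c in protos}
    for c in classes:
        num, den = 0.0, 0.0
        for protos, counts in zip(client_protos, client_counts):
            if c in protos:
                num = num + counts[c] * protos[c]
                den += counts[c]
        global_protos[c] = num / den
    return global_protos

# Toy usage with synthetic embeddings from two clients.
rng = np.random.default_rng(0)
clients, counts_per_client = [], []
for _ in range(2):
    feats = rng.normal(size=(200, 16))
    labels = rng.integers(0, 3, size=200)
    clients.append(local_prototypes(feats, labels))
    counts_per_client.append({int(c): int((labels == c).sum())
                              for c in np.unique(labels)})
global_protos = aggregate_prototypes(clients, counts_per_client)
print({c: p.shape for c, p in global_protos.items()})
```

In a full pipeline, the aggregated prototypes would serve as distillation targets for local training; that step is omitted here since the abstract does not specify the distillation loss.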