

Poster
in
Workshop: Optimization for ML Workshop

Multimodal Federated Learning with Model Personalization

Ratun Rahman · Dinh C. Nguyen


Abstract:

Federated learning (FL) has been widely studied for enabling privacy-preserving machine learning (ML) model training. Most existing FL frameworks focus on unimodal data, where clients train on the same type of data, such as images or time series. However, many real-world applications naturally involve multimodal data from diverse sources. While multimodal FL has recently been proposed, it still faces challenges in managing data heterogeneity across diverse clients. This paper proposes a novel multimodal meta-FL framework, termed mmFL, that orchestrates multimodal learning and personalized learning. Our approach enables federated training of local ML models across data modality clusters while addressing data heterogeneity across clients via a meta-learning-based solution. Extensive simulation results show that our approach significantly improves training performance (by up to 7.18% in accuracy) compared with state-of-the-art algorithms.
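The abstract describes two ingredients: federated averaging within per-modality client clusters, and a meta-learning step that lets each client personalize the cluster model to its own heterogeneous data. The sketch below illustrates that combination on toy linear-regression clients; it is not the authors' mmFL implementation, and all names (`local_sgd`, `meta_round`, the cluster labels) are illustrative. The meta-update shown is a simple Reptile-style interpolation, used here only as one concrete instance of a meta-learning-based server update.

```python
import numpy as np

# Hedged sketch (NOT the paper's code): per-modality-cluster federated
# training with a Reptile-style meta-update, followed by client-side
# personalization via local fine-tuning. Clients are toy linear models.

def local_sgd(w, X, y, lr=0.1, steps=5):
    """A few steps of gradient descent on a linear least-squares model."""
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def meta_round(w_global, clients, meta_lr=0.5):
    """One federated round: each client adapts locally; the server moves
    the cluster model toward the average of the adapted weights."""
    adapted = [local_sgd(w_global.copy(), X, y) for X, y in clients]
    w_avg = np.mean(adapted, axis=0)
    return w_global + meta_lr * (w_avg - w_global)

rng = np.random.default_rng(0)

def make_client(true_w, noise=0.1):
    """Synthetic client data drawn around a cluster-specific model."""
    X = rng.normal(size=(32, 2))
    y = X @ true_w + noise * rng.normal(size=32)
    return X, y

# Two modality clusters (labels illustrative), each with several clients.
clusters = {
    "image":  [make_client(np.array([1.0, -2.0])) for _ in range(3)],
    "sensor": [make_client(np.array([0.5, 3.0])) for _ in range(3)],
}

models = {m: np.zeros(2) for m in clusters}
for _ in range(30):
    for m, clients in clusters.items():
        models[m] = meta_round(models[m], clients)

# Personalization: one client fine-tunes its cluster model on local data.
X, y = clusters["image"][0]
w_personal = local_sgd(models["image"].copy(), X, y)
```

Keeping a separate model per modality cluster avoids averaging incompatible gradients across data types, while the meta-update keeps each cluster model close to a point from which any client in that cluster can adapt in a few local steps.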
