

Poster

Probabilistic Federated Prompt-Tuning in Data Imbalance Settings

Pei-Yau Weng · Minh Hoang · Lam Nguyen · My T. Thai · Lily Weng · Nghia Hoang

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Fine-tuning pre-trained models is a popular approach in machine learning for solving complex tasks with moderate amounts of data. However, fine-tuning the entire pre-trained model is ineffective in federated settings where local data distributions are diversely skewed. To address this, we explore integrating federated learning with a more effective prompt-tuning method, which optimizes a small set of input prefixes to reprogram the pre-trained model's behavior. Our approach transforms federated learning into a distributed set-modeling task, aggregating diverse sets of prompts to globally fine-tune the pre-trained model. We benchmark various baselines based on direct adaptations of existing federated model-aggregation techniques, and introduce a new probabilistic prompt-aggregation method that substantially outperforms them. Our results on a variety of computer vision datasets confirm that the proposed method is the most effective at combating extreme data heterogeneity in federated learning.
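To make the setup concrete, below is a minimal PyTorch sketch of federated prompt-tuning with the naive FedAvg-style aggregation that the abstract lists among the direct-adaptation baselines: each client trains only a small set of prompt vectors against a frozen backbone, and the server averages the returned prompts element-wise. All names and dimensions here (FrozenEncoder, local_prompt_update, the toy data) are illustrative assumptions, not the paper's implementation; the paper's probabilistic aggregation would replace the mean step at the end.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    EMBED_DIM, NUM_PROMPTS, NUM_CLIENTS = 16, 4, 3

    class FrozenEncoder(nn.Module):
        """Stand-in for a pre-trained backbone; its weights are never updated."""
        def __init__(self):
            super().__init__()
            self.layer = nn.Linear(EMBED_DIM, EMBED_DIM)
            for p in self.parameters():
                p.requires_grad_(False)

        def forward(self, x):                  # x: (batch, seq_len, EMBED_DIM)
            return self.layer(x).mean(dim=1)   # pooled representation

    def local_prompt_update(encoder, prompts, data, labels, steps=5):
        """One local round: only the prompt vectors (and a small task head)
        receive gradients; the backbone stays frozen."""
        prompts = prompts.clone().requires_grad_(True)
        head = nn.Linear(EMBED_DIM, 2)
        opt = torch.optim.SGD([prompts] + list(head.parameters()), lr=0.1)
        for _ in range(steps):
            # Prepend the learnable prompts to every input sequence.
            batch = torch.cat([prompts.expand(data.size(0), -1, -1), data], dim=1)
            loss = nn.functional.cross_entropy(head(encoder(batch)), labels)
            opt.zero_grad()
            loss.backward()
            opt.step()
        return prompts.detach()

    encoder = FrozenEncoder()
    global_prompts = torch.zeros(NUM_PROMPTS, EMBED_DIM)

    for round_ in range(2):                    # two federated rounds
        client_prompts = []
        for _ in range(NUM_CLIENTS):           # each client sees its own (skewed) data
            data = torch.randn(8, 6, EMBED_DIM)
            labels = torch.randint(0, 2, (8,))
            client_prompts.append(
                local_prompt_update(encoder, global_prompts, data, labels))
        # Naive FedAvg-style baseline: element-wise mean over client prompts.
        # A probabilistic aggregation would model the set of client prompts
        # instead of collapsing them to a single average.
        global_prompts = torch.stack(client_prompts).mean(dim=0)

    print(global_prompts.shape)                # torch.Size([4, 16])

Note the communication advantage this sketch illustrates: clients exchange only NUM_PROMPTS x EMBED_DIM parameters per round rather than the full backbone, which is why prompt-tuning is attractive in federated settings. The averaging step is also where heterogeneity hurts, since diversely skewed clients can produce prompt sets whose mean serves no client well, motivating the probabilistic aggregation the poster proposes.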
