Poster
in
Workshop: AIM-FM: Advancements In Medical Foundation Models: Explainability, Robustness, Security, and Beyond

PICASO: Secure Aggregation for Federated Learning with Minimal Synchronization

Harish Karthikeyan · Antigoni Polychroniadou


Abstract: Preventing private data leakage is crucial in federated learning. Existing secure aggregation (SA) protocols, the core protocols for privacy-preserving federated learning, require clients to synchronize at multiple points, meaning they must wait for other clients to send their messages before proceeding. This synchronization ensures that inputs can be aggregated without compromising privacy, while also accounting for client dropouts and message delays. This work presents $\mathsf{PICASO}$, abbreviated from Per Iteration Client At most Synchronizes Once, a novel SA protocol that minimizes synchronization overhead in privacy-preserving federated learning, aligning its communication pattern more closely with that of non-private federated learning. $\mathsf{PICASO}$ outperforms previous works such as SecAgg, SecAgg+, MicroSecAgg, and Flamingo, with server runtime under one second even for large numbers of clients. $\mathsf{PICASO}$ demonstrates its viability by training various models on different datasets. We also detail extensions to $\mathsf{PICASO}$ that improve over state-of-the-art algorithms in two key areas: detecting and removing malicious clients, and secure aggregation for heterogeneous datasets. Overall, $\mathsf{PICASO}$ presents an efficient, secure, and flexible federated learning solution that minimizes synchronization needs.
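To make the core idea of secure aggregation concrete, here is a minimal sketch of the classic additive pairwise-masking approach (the baseline behind SecAgg-style protocols, not the $\mathsf{PICASO}$ protocol itself): each pair of clients agrees on a shared random mask, one adds it and the other subtracts it, so all masks cancel in the server's sum and individual inputs remain hidden. All names and the modulus here are illustrative assumptions.

```python
import random

MOD = 2**32  # illustrative modulus; all arithmetic is done mod 2^32


def masked_inputs(inputs):
    """Return each client's masked value; the server only ever sees these.

    masks[i][j] (for i < j) models a random value shared between clients
    i and j, e.g. derived from a pairwise key agreement in real protocols.
    """
    n = len(inputs)
    masks = [[random.randrange(MOD) for _ in range(n)] for _ in range(n)]
    masked = []
    for i, x in enumerate(inputs):
        y = x % MOD
        for j in range(n):
            if i < j:
                y = (y + masks[i][j]) % MOD  # client i adds the shared mask
            elif i > j:
                y = (y - masks[j][i]) % MOD  # client j subtracts the same mask
        masked.append(y)
    return masked


# The server sums the masked values; pairwise masks cancel, revealing
# only the aggregate, never any individual client's input.
inputs = [5, 11, 7]
aggregate = sum(masked_inputs(inputs)) % MOD
assert aggregate == sum(inputs) % MOD
```

In schemes built this way, clients must coordinate on shared masks and handle dropouts (a missing client's masks no longer cancel), which is the source of the multiple synchronization points that $\mathsf{PICASO}$ aims to reduce to at most one per iteration.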
