Poster in Affinity Event: Queer in AI
$\mathsf{OPA}$: One-shot Private Aggregation with Single Client Interaction and its Applications to Federated Learning
Harish Karthikeyan · Antigoni Polychroniadou
Keywords: [ privacy-preserving ] [ federated learning ] [ secure aggregation ]
Abstract:
This paper introduces $\mathsf{OPA}$ (One-shot Private Aggregation), a system for the secure aggregation of data across a large number of clients, in which each client speaks only \emph{once} per iteration. Crucially, clients need neither a setup phase nor inputs from any other party in order to participate in the protocol. $\mathsf{OPA}$ is designed to bridge the gap between traditional federated learning, where model updates are sent in the clear without any additional client participation, and prior secure aggregation protocols, initiated by Bonawitz et al. (CCS '17), that rely on multi-round interaction to complete each iteration. Our key cryptographic component is a Distributed Key-Homomorphic Pseudorandom Function, which we instantiate from both the Learning with Rounding assumption and the Hidden Subgroup Membership assumption in class groups of unknown order. We microbenchmark $\mathsf{OPA}$ against state-of-the-art secure aggregation protocols. Our experiments show that $\mathsf{OPA}$ has the fastest server-side computation, at $<1$ s, even as the number of clients increases. Meanwhile, client performance is competitive with MicroSecAgg (PETS '24) and outperforms Flamingo (S$\&$P '23), SecAgg (CCS '17), and SecAgg+ (CCS '20). We also evaluate $\mathsf{OPA}$ for its intended purpose of federated learning, showing no loss in accuracy across several datasets.
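To illustrate the core idea behind one-shot aggregation with a key-homomorphic PRF, here is a minimal Python sketch. It is not the paper's protocol: it uses a toy exactly-key-homomorphic PRF $F(k, t) = k \cdot H(t) \bmod P$ (rather than an LWR- or class-group-based construction), and the clients' keys are simply sampled to sum to zero in place of a genuine distributed key setup. The modulus `P`, the round tag, and all variable names are illustrative assumptions.

```python
import hashlib
import random

P = 2**61 - 1  # toy prime modulus (illustrative, not from the paper)

def H(tag):
    # Hash a round tag into Z_P.
    return int.from_bytes(hashlib.sha256(tag.encode()).digest(), "big") % P

def prf(key, tag):
    # Toy key-homomorphic PRF: F(k, t) = k * H(t) mod P, so
    # F(k1, t) + F(k2, t) = F(k1 + k2, t) mod P.
    return (key * H(tag)) % P

n = 5
rng = random.Random(0)
# Keys chosen so they sum to 0 mod P (a stand-in for a distributed setup
# in which no single party learns the others' keys).
keys = [rng.randrange(P) for _ in range(n - 1)]
keys.append((-sum(keys)) % P)

inputs = [rng.randrange(1000) for _ in range(n)]
# Each client speaks once: a single masked message per iteration.
masked = [(x + prf(k, "round-1")) % P for x, k in zip(inputs, keys)]

# The server sums the masked messages; since the keys sum to zero,
# the PRF masks cancel and only the sum of the inputs remains.
assert sum(masked) % P == sum(inputs) % P
```

Because the masks cancel only in aggregate, the server learns the sum of the clients' inputs but no individual input, and each client sends exactly one message.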