

Poster

FedAvP: Augment Local Data via Shared Policy in Federated Learning

Minui Hong · Junhyeog Yun · Insu Jeon · Gunhee Kim

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Federated Learning (FL) allows multiple clients to collaboratively train models without directly sharing their private data. While various data augmentation techniques have been actively studied in the FL setting, most of these methods communicate input-level or feature-level data information, posing a risk of privacy leakage. In response to this challenge, we introduce a federated data augmentation algorithm named FedAvP that shares only the augmentation policies, not data-related information. For data security and efficient policy search, we interpret the policy loss as a meta update loss in standard FL algorithms and utilize first-order gradient information to further enhance privacy and reduce communication costs. Moreover, we propose a meta-learning method to search for adaptive personalized policies tailored to heterogeneous clients. Our approach outperforms the best-performing existing augmentation policy search methods and federated data augmentation methods on benchmarks for heterogeneous FL.
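The core idea of sharing only augmentation policies can be illustrated with a minimal sketch. This is a hypothetical toy implementation, not the paper's actual algorithm: the policy is represented as logits over a few augmentation operations, each client applies a first-order gradient update locally (here simulated with random gradients), and the server averages only the policy parameters, so no input- or feature-level data is ever transmitted.

```python
import numpy as np

def softmax(x):
    # Convert policy logits to a probability distribution over augmentation ops
    e = np.exp(x - x.max())
    return e / e.sum()

def local_policy_update(policy_logits, grad, lr=0.1):
    # Client-side first-order update of the shared augmentation policy;
    # in practice `grad` would come from the client's meta update loss
    return policy_logits - lr * grad

def server_aggregate(client_policies):
    # Server averages only policy parameters; raw data stays on the clients
    return np.mean(client_policies, axis=0)

# Toy run: 3 clients, a policy over 4 augmentation operations
rng = np.random.default_rng(0)
global_policy = np.zeros(4)
for _ in range(5):  # 5 communication rounds
    updated = [local_policy_update(global_policy, rng.normal(size=4))
               for _ in range(3)]
    global_policy = server_aggregate(updated)

probs = softmax(global_policy)  # per-operation sampling probabilities
```

Only the 4-dimensional policy vector crosses the network each round, which is also why the communication cost stays small compared to sharing features or synthetic samples.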
