Poster in Workshop: Mathematics of Modern Machine Learning (M3L)
A Theoretical Framework for Federated Domain Generalization with Gradient Alignment
Mahdiyar Molahasani · Milad Soltany · Farhad Pourpanah · Michael Greenspan · Ali Etemad
Keywords: [ Domain generalization ] [ Gradient alignment ] [ Federated learning ]
Gradient alignment has shown empirical success in federated domain generalization, yet its theoretical foundations remain unexplored. To address this gap, in this paper we provide a theoretical framework that links domain shift to gradient alignment. We first model the similarity between domains through the mutual information of their data. We then show that as the domain shift between clients in a federated system increases, the covariance between their respective gradients decreases. We establish this link for federated supervised learning and then extend it to federated unsupervised learning, showing that our findings hold even in a self-supervised setup. By characterizing how gradient alignment affects learning dynamics and domain generalization, our work can further aid the development of robust models.
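As a toy illustration of the claimed link between domain shift and gradient covariance, one can simulate clients whose input distributions differ by a mean offset and compare their gradients at a shared model. The sketch below is our own illustration, not the authors' method: the linear-regression setup, the mean-offset model of domain shift, and the helper names `client_gradient` and `gradient_alignment` are all illustrative assumptions, and cosine similarity is used here only as a simple empirical proxy for the gradient covariance discussed in the abstract.

```python
import numpy as np

def client_gradient(w, X, y):
    """Gradient of the mean-squared-error loss (1/2n)*||Xw - y||^2 for a linear model."""
    n = X.shape[0]
    return X.T @ (X @ w - y) / n

def gradient_alignment(g1, g2):
    """Cosine similarity between two clients' gradients
    (a simple proxy for their covariance)."""
    return float(g1 @ g2 / (np.linalg.norm(g1) * np.linalg.norm(g2)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, n = 5, 2000
    w_true = rng.normal(size=d)   # shared labeling function across domains
    w = np.zeros(d)               # current global model at which gradients are taken

    def make_client(shift):
        # Domain shift modeled (as an assumption) by a mean offset
        # of the client's input distribution.
        X = rng.normal(size=(n, d)) + shift
        y = X @ w_true + 0.1 * rng.normal(size=n)
        return X, y

    g_ref = client_gradient(w, *make_client(0.0))
    for s in (0.0, 1.0, 5.0):
        g = client_gradient(w, *make_client(s))
        print(f"shift={s:>3}: alignment with reference client = "
              f"{gradient_alignment(g_ref, g):+.3f}")
```

Increasing the offset typically drives the two clients' gradients apart, mirroring (in a very simplified setting) the paper's claim that larger domain shift lowers inter-client gradient covariance.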