

Poster

SAMPa: Sharpness-aware Minimization Parallelized

Wanyun Xie · Thomas Pethick · Volkan Cevher

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Sharpness-aware minimization (SAM) has been shown to improve the generalization of neural networks. However, each SAM update requires sequentially computing two gradients, effectively doubling the per-iteration cost compared to base optimizers like SGD. We propose a simple modification of SAM, termed SAMPa, which allows us to fully parallelize the two gradient computations. SAMPa achieves a twofold speedup over SAM under the assumption that communication costs between devices are negligible. Empirical results demonstrate that SAMPa incurs the lowest computational time among existing efficient variants of SAM. Additionally, our method consistently outperforms SAM across both vision and language tasks. Notably, SAMPa theoretically maintains convergence guarantees even for fixed perturbation sizes, which is established through a novel Lyapunov function. We in fact arrive at SAMPa by treating this convergence guarantee as a hard requirement—an approach we believe is promising for developing SAM-based methods in general.
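For context, the sketch below shows a single step of standard SAM (not SAMPa), assuming a PyTorch model, a hypothetical loss function `loss_fn`, a base optimizer `opt`, and perturbation size `rho`. It illustrates the sequential dependency the abstract refers to: the second gradient can only be computed after the perturbation derived from the first gradient has been applied, which is the bottleneck SAMPa removes by parallelizing the two computations. The actual SAMPa update rule is given in the paper and is not reproduced here.

```python
# Minimal sketch of one standard SAM step (not SAMPa), assuming a PyTorch
# model, a loss function `loss_fn`, and a base optimizer `opt`.
import torch

def sam_step(model, loss_fn, batch, opt, rho=0.05):
    inputs, targets = batch

    # 1) First gradient: evaluated at the current weights w.
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    # Perturb weights along the normalized gradient: w <- w + rho * g / ||g||.
    grads = [p.grad for p in model.parameters() if p.grad is not None]
    grad_norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = []
    with torch.no_grad():
        for p in model.parameters():
            if p.grad is None:
                eps.append(None)
                continue
            e = rho * p.grad / (grad_norm + 1e-12)
            p.add_(e)
            eps.append(e)

    # 2) Second gradient: at the perturbed weights w + eps.
    #    In SAM this must wait for step 1, doubling per-iteration cost.
    opt.zero_grad()
    loss_fn(model(inputs), targets).backward()

    # Undo the perturbation and update the original weights using the
    # gradient taken at the perturbed point.
    with torch.no_grad():
        for p, e in zip(model.parameters(), eps):
            if e is not None:
                p.sub_(e)
    opt.step()
    opt.zero_grad()
    return loss.item()
```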
