

Poster in Workshop: Optimization for ML Workshop

On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent

Devansh Gupta · Meisam Razaviyayn · Vatsal Sharan


Abstract:

Differentially private zeroth-order optimization methods have recently gained popularity for private fine-tuning of machine learning models due to their favorable empirical performance and reduced memory requirements. Current approaches for privatizing zeroth-order methods rely on adding Gaussian noise to the estimated zeroth-order gradients. However, because the search direction in these methods is inherently random, prior work, including Tang et al. and Zhang et al., has raised a fundamental question: is the inherent noise in zeroth-order estimators sufficient to ensure the overall differential privacy of the algorithm? This work settles this question for a class of oracle-based optimization algorithms in which the oracle returns zeroth-order gradient estimates. In particular, we show that for a fixed initialization, there exist strongly convex objective functions such that running Projected Zeroth-Order Gradient Descent (ZO-GD) is not differentially private. Moreover, we show that, even with random initialization, the privacy loss of ZO-GD increases superlinearly with the number of iterations when minimizing convex objective functions.
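To make the algorithm class concrete, the following is a minimal sketch of projected zeroth-order gradient descent with a two-point gradient estimator, where the only randomness comes from the search direction (no added Gaussian noise). The constraint set (a Euclidean ball), the quadratic objective, and all parameter values are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def two_point_zo_gradient(f, x, mu, rng):
    """Two-point zeroth-order gradient estimate of f at x.

    Draws a random unit direction u and returns
    d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u,
    an estimate of the gradient of the mu-smoothed objective.
    """
    d = x.shape[0]
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)  # uniform direction on the unit sphere
    return d * (f(x + mu * u) - f(x - mu * u)) / (2 * mu) * u

def project_l2_ball(x, radius):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

def zo_projected_gradient_descent(f, x0, eta, mu, radius, num_iters, seed=0):
    """Projected ZO-GD with two-point estimates.

    No noise is injected beyond the randomness of the search direction;
    the paper asks whether this inherent randomness alone suffices for
    differential privacy (and shows that, in general, it does not).
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for _ in range(num_iters):
        g = two_point_zo_gradient(f, x, mu, rng)
        x = project_l2_ball(x - eta * g, radius)
    return x

if __name__ == "__main__":
    # Illustrative strongly convex quadratic (hypothetical, not from the paper).
    A = np.diag([1.0, 4.0])
    b = np.array([1.0, -2.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    x_final = zo_projected_gradient_descent(
        f, x0=np.zeros(2), eta=0.1, mu=1e-3, radius=5.0, num_iters=500
    )
    print("approximate minimizer:", x_final)
```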