Contributed Talks in Workshop: Optimization for ML
Talk 1: *On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent* and Talk 2: *The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization*
Devansh Gupta · Matan Schliserman
Talk 1: On the Inherent Privacy of Two Point Zeroth Order Projected Gradient Descent, Devansh Gupta
Abstract: Differentially private zeroth-order optimization methods have recently gained popularity in private fine-tuning of machine learning models due to their favorable empirical performance and reduced memory requirements. Current approaches to privatizing zeroth-order methods rely on adding Gaussian noise to the estimated zeroth-order gradients. However, because the search direction in these methods is inherently random, researchers including Tang et al. and Zhang et al. have raised a fundamental question: is the inherent noise in zeroth-order estimators sufficient to ensure the overall differential privacy of the algorithm? This work settles that question for a class of oracle-based optimization algorithms in which the oracle returns zeroth-order gradient estimates. In particular, we show that for a fixed initialization, there exist strongly convex objective functions such that running Projected Zeroth-Order Gradient Descent (ZO-GD) is not differentially private. Moreover, we show that even with random initialization, the privacy loss of ZO-GD increases superlinearly with the number of iterations when minimizing convex objective functions.
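For readers less familiar with the oracle in question, the following is a minimal numpy sketch of the two-point zeroth-order estimator and the projected ZO-GD loop the abstract refers to. The Gaussian search direction, the L2-ball constraint set, and all function names here are illustrative assumptions of this sketch, not the authors' construction.

```python
import numpy as np

def two_point_zo_gradient(f, w, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of f at w.

    Draws a random Gaussian direction u and returns
    (f(w + mu*u) - f(w - mu*u)) / (2*mu) * u; the randomness of u is the
    'inherent noise' whose privacy the abstract investigates.
    """
    if rng is None:
        rng = np.random.default_rng()
    u = rng.standard_normal(w.shape)
    return (f(w + mu * u) - f(w - mu * u)) / (2.0 * mu) * u

def project_l2_ball(w, radius=1.0):
    """Euclidean projection onto the L2 ball of the given radius."""
    norm = np.linalg.norm(w)
    return w if norm <= radius else (radius / norm) * w

def zo_gd(f, w0, step=0.1, iters=100, mu=1e-3, radius=1.0, rng=None):
    """Projected zeroth-order gradient descent: take a step along the
    two-point estimate, then project back onto the constraint set."""
    w = np.asarray(w0, dtype=float)
    for _ in range(iters):
        g = two_point_zo_gradient(f, w, mu=mu, rng=rng)
        w = project_l2_ball(w - step * g, radius=radius)
    return w

# Example: minimize a strongly convex quadratic over the unit ball.
w_star = zo_gd(lambda w: float(np.sum(w ** 2)), w0=np.ones(5))
```

Note that no Gaussian noise is added beyond the estimator's own randomness; the talk's negative result concerns exactly this unprivatized variant.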
Talk 2: The Dimension Strikes Back with Gradients: Generalization of Gradient Methods in Stochastic Convex Optimization, Matan Schliserman
Abstract: We study the generalization performance of gradient methods in the fundamental stochastic convex optimization setting, focusing on its dimension dependence. First, for full-batch gradient descent (GD), we give a construction of a learning problem in dimension d = O(n^2) where the canonical version of GD (tuned for optimal performance on the empirical risk), trained with n examples, converges with constant probability to an approximate empirical risk minimizer with Omega(1) population excess risk. Our construction translates to a lower bound of Omega(d^(1/2)) on the number of training examples required for standard GD to reach a non-trivial test error, answering an open question raised by Feldman (2016) and Amir, Koren and Livni (2021) and showing that a non-trivial dimension dependence is unavoidable. Furthermore, for standard one-pass stochastic gradient descent (SGD), we show that the same construction technique provides a similar Omega(d^(1/2)) lower bound on the sample complexity needed for SGD to reach a non-trivial empirical error, despite achieving optimal test performance. This again gives an exponential improvement in the dimension dependence compared to previous work (Koren et al., 2022), resolving an open question left therein.
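For concreteness, here is a brief formalization of the quantities the abstract compares, in standard stochastic convex optimization notation; the specific symbols (F, the empirical risk F-hat, the projection Pi, the step size eta) are assumptions of this sketch rather than the talk's own notation.

```latex
% Population and empirical risk over n i.i.d. samples z_1, ..., z_n:
\[
  F(w) = \mathbb{E}_{z \sim \mathcal{D}}\big[f(w; z)\big],
  \qquad
  \widehat{F}(w) = \frac{1}{n} \sum_{i=1}^{n} f(w; z_i).
\]
% Full-batch projected GD, tuned for optimal empirical performance:
\[
  w_{t+1} = \Pi_{\mathcal{W}}\big(w_t - \eta \nabla \widehat{F}(w_t)\big).
\]
% The construction, in dimension d = O(n^2), yields with constant probability
\[
  F(w_{\mathrm{GD}}) - \min_{w \in \mathcal{W}} F(w) = \Omega(1),
\]
% which is equivalent to the stated sample-complexity lower bound
% n = \Omega(\sqrt{d}) for reaching non-trivial test error.
```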