Poster
Attack-Aware Noise Calibration for Differential Privacy
Bogdan Kulynych · Juan Gomez · Georgios Kaissis · Flavio Calmon · Carmela Troncoso
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST
Abstract:
Differential privacy (DP) is a widely used approach for mitigating privacy risks when training machine learning models on sensitive data. DP mechanisms add noise during training to limit the risk of information leakage. The scale of the added noise is critical, as it determines the trade-off between privacy and utility. The standard practice is to select the noise scale in terms of a _privacy budget parameter_ $\varepsilon$. This parameter is in turn interpreted in terms of operational _attack risk_, such as the accuracy, sensitivity, or specificity of inference attacks against the training data. We demonstrate that this two-step procedure of first calibrating the noise scale to a privacy budget $\varepsilon$, and then translating $\varepsilon$ to attack risk, leads to overly conservative risk assessments and unnecessarily low utility. We propose methods to directly calibrate the noise scale to a desired attack risk level, bypassing the intermediate step of choosing $\varepsilon$. For a target attack risk, our approach significantly decreases the noise scale, leading to increased utility at the same level of privacy. We empirically demonstrate that calibrating noise to attack sensitivity/specificity, rather than to $\varepsilon$, when training privacy-preserving ML models substantially improves model accuracy for the same risk level. Our work provides a principled and practical way to improve the utility of privacy-preserving ML without compromising on privacy.
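To make the idea of attack-aware calibration concrete, the sketch below shows how one might solve directly for the noise scale of a single Gaussian mechanism release so that any membership-inference attack at a fixed false-positive rate achieves at most a target true-positive rate (attack sensitivity). This is an illustrative simplification, not the paper's method: it covers one Gaussian release only (no subsampling or composition as in DP-SGD), and the function names, parameters, and example numbers are assumptions for the sketch. The $\varepsilon$-based baseline uses the classical Gaussian mechanism bound $\sigma \ge \sqrt{2\ln(1.25/\delta)}\,\Delta/\varepsilon$ (valid for $\varepsilon \le 1$) purely for contrast.

```python
import math
from scipy.stats import norm


def gaussian_sigma_for_attack_risk(sensitivity: float,
                                   target_tpr: float,
                                   attack_fpr: float) -> float:
    """Smallest Gaussian noise scale such that any membership-inference attack
    operating at false-positive rate `attack_fpr` has true-positive rate
    (attack sensitivity) at most `target_tpr`.

    The optimal attacker distinguishes N(0, sigma^2) from N(Delta, sigma^2);
    at FPR alpha its TPR is Phi(Delta/sigma + Phi^{-1}(alpha)).  We invert
    this relation for sigma.  (Illustrative single-release sketch only.)
    """
    if not (0.0 < attack_fpr < target_tpr < 1.0):
        raise ValueError("require 0 < attack_fpr < target_tpr < 1")
    mu = norm.ppf(target_tpr) - norm.ppf(attack_fpr)  # required distinguishability
    return sensitivity / mu


def gaussian_sigma_for_epsilon(sensitivity: float,
                               epsilon: float,
                               delta: float) -> float:
    """Two-step baseline: pick (epsilon, delta) first, then calibrate noise via
    the classical Gaussian mechanism bound (valid for epsilon <= 1)."""
    return math.sqrt(2.0 * math.log(1.25 / delta)) * sensitivity / epsilon


if __name__ == "__main__":
    # Hypothetical target: cap attacks at 10% FPR to at most 50% TPR.
    sigma_attack = gaussian_sigma_for_attack_risk(1.0, target_tpr=0.5, attack_fpr=0.1)
    sigma_eps = gaussian_sigma_for_epsilon(1.0, epsilon=1.0, delta=1e-5)
    print(f"attack-calibrated sigma:      {sigma_attack:.3f}")
    print(f"epsilon-calibrated sigma:     {sigma_eps:.3f}")
```

Under these illustrative settings the attack-calibrated noise scale is several times smaller than the $\varepsilon$-calibrated one, which is the qualitative effect the abstract describes; the paper's actual calibration handles subsampled and composed mechanisms such as DP-SGD.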