

Spotlight Poster

Curvature Clues: Decoding Deep Learning Privacy with Input Loss Curvature

Deepak Ravikumar · Efstathia Soufleri · Kaushik Roy

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In this paper, we explore the properties of loss curvature with respect to input data in deep neural networks. The curvature of the loss with respect to the input (termed input loss curvature) is the trace of the Hessian of the loss with respect to the input. We investigate how input loss curvature varies between train and test sets, and its implications for train-test distinguishability. We develop a theoretical framework that derives an upper bound on train-test distinguishability based on privacy and the size of the training set. This novel insight fuels the development of a new black-box membership inference attack utilizing input loss curvature. We validate our theoretical findings through experiments on computer vision classification tasks, demonstrating that input loss curvature surpasses existing methods in membership inference effectiveness. Moreover, our analysis sheds light on the potential of using subsets of training data as a defense mechanism against shadow-model-based membership inference attacks, revealing a previously unknown limitation of shadow-model-based methods. These findings not only advance our understanding of deep neural network behavior but also improve the ability to test privacy-preserving techniques in machine learning.
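The quantity the abstract centers on, the trace of the Hessian of the loss with respect to the input, can be estimated without forming the full Hessian. Below is a minimal sketch (not code from the paper) of a Hutchinson-style estimator in PyTorch; the model, loss choice, and number of probe vectors are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def input_loss_curvature(model, x, y, num_probes=10):
    """Hutchinson estimate of tr(H), where H is the Hessian of the
    loss with respect to the input x (illustrative sketch)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    # First-order gradient w.r.t. the input, kept in the graph so we
    # can differentiate through it again for Hessian-vector products.
    grad = torch.autograd.grad(loss, x, create_graph=True)[0]
    trace_est = 0.0
    for _ in range(num_probes):
        # Rademacher probe vector v with entries in {-1, +1}.
        v = torch.randint_like(x, high=2) * 2.0 - 1.0
        # Hessian-vector product H v via a second backward pass.
        hvp = torch.autograd.grad((grad * v).sum(), x, retain_graph=True)[0]
        # E[v^T H v] over Rademacher v equals tr(H).
        trace_est += (hvp * v).sum().item()
    return trace_est / num_probes
```

In a membership-inference setting of the kind the abstract describes, such a curvature score would be computed per example and thresholded or calibrated to separate training members from non-members; the specific attack construction and theoretical bound are given in the paper itself.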
