Poster in Workshop: Red Teaming GenAI: What Can We Learn from Adversaries?
Semantic Membership Inference Attack against Large Language Models
Hamid Mozaffari · Virendra Marathe
Keywords: [ Large Language Models ] [ Membership Inference Attack ]
Membership Inference Attacks (MIAs) determine whether a specific data point was included in the training set of a target model. In this paper, we introduce the Semantic Membership Inference Attack (SMIA), a novel approach that enhances MIA performance by leveraging the semantic content of inputs and their perturbations. SMIA trains a neural network to analyze the target model’s behavior on perturbed inputs, effectively capturing variations in output probability distributions between members and non-members. We conduct comprehensive evaluations on the Pythia and GPT-Neo model families using the Wikipedia dataset. Our results show that SMIA significantly outperforms existing MIAs; for instance, SMIA achieves an AUC-ROC of 67.39% on Pythia-12B, compared to 58.90% by the second-best attack.
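To give a concrete sense of the pipeline the abstract describes, the sketch below shows one possible realization in Python with Hugging Face Transformers: score a candidate text and several perturbed copies under a target model (here a small Pythia checkpoint), use the likelihood gaps as features, and feed them to a small neural classifier that separates members from non-members. The perturbation function (simple word dropout), the feature construction, and the classifier architecture are illustrative placeholders, not the method from the paper, which relies on semantic perturbations.

```python
# Illustrative sketch of a perturbation-based membership inference pipeline.
# NOT the authors' implementation: the perturbation scheme, features, and
# classifier are simplifying assumptions made for demonstration only.
import random

import torch
from torch import nn
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = AutoTokenizer.from_pretrained("EleutherAI/pythia-160m")
model = AutoModelForCausalLM.from_pretrained("EleutherAI/pythia-160m").to(device).eval()


@torch.no_grad()
def avg_log_likelihood(text: str) -> float:
    """Average per-token log-likelihood of `text` under the target model."""
    ids = tokenizer(text, return_tensors="pt").input_ids.to(device)
    out = model(ids, labels=ids)
    return -out.loss.item()  # loss is the mean negative log-likelihood


def perturb(text: str, drop_prob: float = 0.1) -> str:
    """Placeholder perturbation: random word dropout (SMIA uses semantic perturbations)."""
    words = text.split()
    kept = [w for w in words if random.random() > drop_prob]
    return " ".join(kept) if kept else text


def membership_features(text: str, n_perturbations: int = 8) -> torch.Tensor:
    """Feature vector: likelihood gaps between the original text and its perturbations."""
    base = avg_log_likelihood(text)
    gaps = [base - avg_log_likelihood(perturb(text)) for _ in range(n_perturbations)]
    return torch.tensor(gaps)


# A tiny membership classifier; in practice it would be trained on features
# computed from texts with known member / non-member labels.
classifier = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

score = classifier(membership_features("The quick brown fox jumps over the lazy dog."))
print(score.item())  # higher score would indicate a likely training-set member
```

The intuition this sketch captures is the one stated in the abstract: a model tends to react differently to perturbations of text it has memorized than to perturbations of unseen text, and a learned classifier can exploit those differences in the output probability distributions.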
Video Link: https://drive.google.com/file/d/18m6eMbS7GgSOhbcNoOeQW3CIb-VXhsYj/view?usp=sharing