

Poster in Workshop: Safe Generative AI

Has My System Prompt Been Used? Large Language Model Prompt Membership Inference

Roman Levin · Valeriia Cherepanova · Abhimanyu Hans · Avi Schwarzschild · Tom Goldstein


Abstract:

Prompt engineering has emerged as a powerful technique for optimizing large language models (LLMs) for specific applications, enabling faster prototyping and improved performance, and sparking community interest in protecting proprietary system prompts. In this work, we explore a novel perspective on prompt privacy through the lens of membership inference. We develop Prompt Detective, a statistical method to reliably determine whether a given system prompt was used by a third-party language model. Our approach relies on a statistical test comparing the distributions of two groups of generations corresponding to different system prompts. Through extensive experiments with a variety of language models, we demonstrate the effectiveness of Prompt Detective in both standard and challenging scenarios, including black-box settings. Our work reveals that even minor changes in system prompts manifest in distinct response distributions, enabling us to verify prompt usage with statistical significance.
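
At its core, the method reduces to a two-sample test: generations collected from the third-party model are compared against generations produced under the candidate system prompt, and prompt usage is verified if the two groups are statistically indistinguishable (or rejected if they differ significantly). The sketch below is a minimal illustration of such a test, not the paper's exact procedure; it assumes generations have already been embedded as vectors (e.g., with a sentence encoder) and uses a permutation test on the distance between group-mean embeddings as the test statistic.

```python
# Minimal sketch of a two-sample permutation test over generation embeddings.
# Assumptions for illustration: generations are pre-embedded as vectors, and
# the test statistic is the distance between group-mean embeddings.
import numpy as np


def permutation_test(group_a: np.ndarray, group_b: np.ndarray,
                     n_permutations: int = 10_000, seed: int = 0) -> float:
    """p-value for the null hypothesis that both groups of generation
    embeddings come from the same distribution."""
    rng = np.random.default_rng(seed)

    def statistic(a: np.ndarray, b: np.ndarray) -> float:
        # Distance between group means; any two-sample statistic could be used.
        return float(np.linalg.norm(a.mean(axis=0) - b.mean(axis=0)))

    observed = statistic(group_a, group_b)
    pooled = np.vstack([group_a, group_b])
    n_a = len(group_a)

    # Count how often a random relabeling of the pooled generations yields a
    # statistic at least as extreme as the observed one.
    count = 0
    for _ in range(n_permutations):
        perm = rng.permutation(len(pooled))
        if statistic(pooled[perm[:n_a]], pooled[perm[n_a:]]) >= observed:
            count += 1
    return (count + 1) / (n_permutations + 1)
```

In use, `group_a` would hold embeddings of responses sampled from the third-party model and `group_b` embeddings of responses generated under the candidate (or an alternative) system prompt; a small p-value indicates the two response distributions differ, consistent with different underlying prompts.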
