Keynote Talk
3rd Workshop on New Frontiers in Adversarial Machine Learning (AdvML-Frontiers)
Franziska Boenisch
As large language models (LLMs) underpin various sensitive applications, preserving the privacy of their training data is crucial for their trustworthy deployment. This talk will focus on the privacy of LLM adaptation data. We will see how easily sensitive data can leak from adaptations, putting privacy at risk. We will then turn to designing protection methods, focusing on how to obtain privacy guarantees for adaptation data, in particular for prompts. We will also compare private adaptations for open LLMs with those for their closed, proprietary counterparts across several axes, finding that private adaptations of open LLMs yield stronger privacy, better performance, and lower costs. Finally, we will discuss how to monitor the privacy of adapted LLMs through dedicated auditing. By identifying the privacy risks of adapting LLMs, understanding how to mitigate them, and conducting thorough audits, we can ensure that LLMs are employed for societal benefit without putting individual data at risk.