[On-Demand] Keynote in Workshop: Multi-Agent Security: Security as Key to AI Safety
Towards AI-based auditing of privacy risks in privacy-enhancing technologies
Ana-Maria Cretu
The large-scale collection and availability of data is changing how we do science and make decisions. We are witnessing a huge demand to share data, especially in the medical, public and financial sectors. Large-scale data is also at the core of recent progress in large language models. A key question is how to share data without putting people's privacy at risk. It turns out that this is quite hard, as people can easily be re-identified from a few pieces of information. Can we use AI to design more powerful attacks and, in this way, audit the privacy offered by different systems? We envision two directions. First, given an existing attack, can we improve its performance using AI? Finding the best possible, or at least stronger, attacks gives us tighter estimates of the risk, which means we are less likely to release a dataset or a set of aggregates that is not safe. The second direction is the discovery of new attacks: can we develop tools that find new attacks or automate the search for vulnerabilities? In this keynote, I will show two examples of using AI for automated auditing, in the database and query release settings.