De-Anonymizing Text by Fingerprinting Language Generation
Zhen Sun, Roei Schuster, Vitaly Shmatikov
Spotlight presentation: Orals & Spotlights Track 03: Language/Audio Applications
on 2020-12-07T20:20:00-08:00 - 2020-12-07T20:30:00-08:00
Poster Session 1
on 2020-12-07T21:00:00-08:00 - 2020-12-07T23:00:00-08:00
GatherTown: Bioinformatics, healthcare, and social ( Town A0 - Spot D2 )
Only if the poster is crowded, join Zoom. Authors have to start the Zoom call from their Profile page / Presentation History.
Abstract: Components of machine learning systems are not (yet) perceived as security hotspots. Secure coding practices, such as ensuring that no execution paths depend on confidential inputs, have not yet been adopted by ML developers. We initiate the study of code security of ML systems by investigating how nucleus sampling---a popular approach for generating text, used for applications such as auto-completion---unwittingly leaks texts typed by users. Our main result is that the series of nucleus sizes for many natural English word sequences is a unique fingerprint. We then show how an attacker can infer typed text by measuring these fingerprints via a suitable side channel (e.g., cache access times), explain how this attack could help de-anonymize anonymous texts, and discuss defenses.
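To make the leak concrete, below is a minimal sketch of nucleus (top-p) sampling in Python. The function names, the p=0.9 cutoff, and the toy softmax distributions are illustrative assumptions, not the paper's implementation; the point is only that each generation step induces a nucleus size, and the series of sizes across a word sequence forms the fingerprint described in the abstract.

```python
import numpy as np

def nucleus_size(probs, p=0.9):
    """Size of the nucleus: the smallest set of most-probable tokens
    whose cumulative probability reaches p (top-p sampling)."""
    sorted_probs = np.sort(probs)[::-1]     # probabilities, descending
    cumulative = np.cumsum(sorted_probs)
    # first index where the cumulative mass reaches p; +1 turns the
    # 0-based index into a set size
    return int(np.searchsorted(cumulative, p) + 1)

def nucleus_sample(probs, p=0.9, rng=None):
    """Sample a token id from the renormalized nucleus."""
    rng = rng or np.random.default_rng()
    order = np.argsort(probs)[::-1]         # token ids, most probable first
    k = nucleus_size(probs, p)              # data-dependent: this is the leak
    nucleus = order[:k]
    return int(rng.choice(nucleus, p=probs[nucleus] / probs[nucleus].sum()))

# Toy stand-in for a language model: one softmax distribution per step.
# Feeding a real model a typed word sequence yields one nucleus size per
# step; the paper's observation is that this series is often unique.
rng = np.random.default_rng(0)
vocab = 50_000
fingerprint = []
for _ in range(10):
    logits = rng.standard_normal(vocab)
    probs = np.exp(logits - logits.max())   # numerically stable softmax
    probs /= probs.sum()
    fingerprint.append(nucleus_size(probs))
print(fingerprint)                          # one nucleus size per step
```

Note that k above depends on the typed text: any code path or memory footprint that scales with it (here, the renormalization over the nucleus entries) is exactly the kind of input-dependent execution the abstract warns about, and what a cache-timing attacker would measure to recover the size series.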