Poster

AudioMarkBench: Benchmarking Robustness of Audio Watermarking

Hongbin Liu · Moyang Guo · Zhengyuan Jiang · Lun Wang · Neil Gong

Fri 13 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The increasing realism of synthetic speech, driven by advances in text-to-speech models, raises ethical concerns about impersonation and disinformation. Audio watermarking offers a promising defense by embedding human-imperceptible watermarks into AI-generated audio. However, the robustness of audio watermarking against common and adversarial perturbations remains understudied. We present AudioMarkBench, the first systematic benchmark for evaluating the robustness of audio watermarking against watermark removal and watermark forgery. AudioMarkBench includes a new dataset built from Common Voice that spans languages, biological sexes, and ages; three state-of-the-art watermarking methods; and 15 types of perturbations. We benchmark the robustness of these methods against the perturbations in no-box, black-box, and white-box settings. Our findings highlight the vulnerabilities of current watermarking techniques and emphasize the need for more robust and fair audio watermarking solutions. Our dataset and code are publicly available at https://github.com/moyangkuo/AudioMarkBench.
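As a rough illustration of the no-box robustness setting described above, the sketch below applies one common perturbation (additive Gaussian noise at a target SNR) to a watermarked waveform before re-running detection. It is a minimal, NumPy-only example and is not taken from the AudioMarkBench code; the `detect_watermark` call is a hypothetical stand-in for whatever detector a given watermarking method provides.

```python
import numpy as np

def add_gaussian_noise(waveform: np.ndarray, snr_db: float) -> np.ndarray:
    """Add white Gaussian noise to a waveform at the target signal-to-noise ratio (dB)."""
    signal_power = np.mean(waveform ** 2)
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = np.random.normal(0.0, np.sqrt(noise_power), size=waveform.shape)
    return waveform + noise

# Example: a 1-second 16 kHz sine tone standing in for a watermarked audio clip.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
watermarked = 0.5 * np.sin(2 * np.pi * 440 * t)

# No-box removal attempt: perturb the audio without any knowledge of the watermark.
perturbed = add_gaussian_noise(watermarked, snr_db=20.0)

# `detect_watermark` is hypothetical; robustness is judged by whether detection
# still succeeds on `perturbed` while audio quality remains acceptable.
# detected_before = detect_watermark(watermarked)
# detected_after = detect_watermark(perturbed)
```

A benchmark run in this setting would sweep such perturbations (noise, resampling, compression, and so on) over many clips and report how detection accuracy degrades as perturbation strength increases.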
