Oral in Workshop: Audio Imagination: NeurIPS 2024 Workshop AI-Driven Speech, Music, and Sound Generation

Improving Source Extraction with Diffusion and Consistency Models

Tornike Karchkhadze · Mohammad Rasool Izadi · Shuo Zhang

Sat 14 Dec 2 p.m. PST — 2:15 p.m. PST

Abstract:

In this work, we integrate a score-matching diffusion model into a standard deterministic architecture for time-domain musical source extraction. To address the typically slow iterative sampling of diffusion models, we apply consistency distillation, reducing sampling to a single step while matching the diffusion model's performance; with two or more steps, the distilled model even surpasses it. Trained on the Slakh2100 dataset for four instruments (bass, drums, guitar, and piano), our model shows significant improvements across objective metrics compared to baseline methods. Sound examples are available at https://consistency-separation.github.io/.
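
The abstract does not spell out the sampling procedure, so here is a minimal PyTorch sketch of the single- versus few-step consistency sampling idea it describes, loosely following Song et al.'s consistency-model formulation and conditioning on the mixture waveform. The network TinyConsistencyNet, the noise-schedule constants, and all shapes are hypothetical placeholders, not the authors' implementation.

import math
import torch
import torch.nn as nn

SIGMA_MIN, SIGMA_MAX = 0.002, 80.0  # assumed Karras-style noise range
SIGMA_DATA = 0.5                    # assumed data standard deviation

class TinyConsistencyNet(nn.Module):
    """Stand-in for the (unspecified) consistency-distilled extractor.

    Maps a noisy source estimate, its noise level, and the mixture
    conditioning to a clean-source prediction in one forward pass.
    """
    def __init__(self, channels=1):
        super().__init__()
        self.net = nn.Conv1d(2 * channels, channels, kernel_size=3, padding=1)

    def forward(self, x_noisy, sigma, mixture):
        h = self.net(torch.cat([x_noisy, mixture], dim=1))
        # Consistency-model skip parameterization (Song et al., 2023):
        # enforces f(x, SIGMA_MIN) = x at the lowest noise level.
        s = sigma.view(-1, 1, 1)
        c_skip = SIGMA_DATA**2 / ((s - SIGMA_MIN)**2 + SIGMA_DATA**2)
        c_out = SIGMA_DATA * (s - SIGMA_MIN) / (s**2 + SIGMA_DATA**2).sqrt()
        return c_skip * x_noisy + c_out * h

@torch.no_grad()
def consistency_sample(model, mixture, steps=1):
    """Few-step consistency sampling: steps=1 is a single evaluation;
    steps>=2 re-noises and re-denoises at intermediate noise levels."""
    b = mixture.shape[0]
    x = torch.randn_like(mixture) * SIGMA_MAX  # start from pure noise
    source = model(x, torch.full((b,), SIGMA_MAX), mixture)
    # Geometrically spaced noise levels strictly between max and min.
    levels = torch.exp(
        torch.linspace(math.log(SIGMA_MAX), math.log(SIGMA_MIN), steps + 1)
    ).tolist()
    for sigma in levels[1:-1]:
        noise = torch.randn_like(source) * math.sqrt(sigma**2 - SIGMA_MIN**2)
        source = model(source + noise, torch.full((b,), sigma), mixture)
    return source

mixture = torch.randn(4, 1, 16000)  # batch of 1 s mono mixtures at 16 kHz
model = TinyConsistencyNet()
one_step = consistency_sample(model, mixture, steps=1)  # single-step extraction
two_step = consistency_sample(model, mixture, steps=2)  # extra refinement step

With steps=1 the sampler performs exactly one network evaluation, which is where the speedup over iterative diffusion sampling comes from; each additional step re-noises the current estimate to an intermediate noise level and denoises it again.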
