Poster+Demo Session in Workshop: Audio Imagination: NeurIPS 2024 Workshop AI-Driven Speech, Music, and Sound Generation

SoundCTM: Uniting Score-based and Consistency Models for Text-to-Sound Generation

Koichi Saito · Dongjun Kim · Takashi Shibuya · Chieh-Hsin Lai · Zhi Zhong · Yuhta Takida · Yuki Mitsufuji

Sat 14 Dec 4:15 p.m. PST — 5:30 p.m. PST

Abstract: Recent high-quality diffusion-based sound generation models can serve as valuable tools for sound content creators. However, despite producing high-quality sounds, these models often suffer from slow inference speeds. This drawback burdens creators, who typically refine their sounds through trial and error to align them with their artistic intentions. To address this issue, we introduce Sound Consistency Trajectory Models (SoundCTM). Our model enables flexible transitions between high-quality $1$-step sound generation and superior sound quality through multi-step generation. This allows creators to first control sounds with $1$-step samples and then refine them through multi-step generation. We reframe the original CTM's training framework and introduce a novel feature distance, computed with the teacher network, for the distillation loss. Additionally, while distilling classifier-free guided trajectories, we train conditional and unconditional student models simultaneously and interpolate between these models during inference. SoundCTM achieves both promising $1$-step and multi-step real-time sound generation. Audio samples are available at https://anonymus-soundctm.github.io/soundctm/.
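The inference-time interpolation between the conditional and unconditional student models described in the abstract mirrors classifier-free guidance. As a rough sketch only (the function and argument names below are hypothetical and not the authors' code), one guided sampling step could look like this, where both student networks predict a jump from time t to time s along the same trajectory and w is the guidance weight:

def guided_student_output(student_cond, student_uncond, x_t, t, s, text_emb, w):
    # Hypothetical SoundCTM-style step: query both distilled students.
    pred_cond = student_cond(x_t, t, s, text_emb)   # text-conditional trajectory prediction
    pred_uncond = student_uncond(x_t, t, s)         # unconditional trajectory prediction
    # Interpolate (extrapolate for w > 1) between the two outputs at inference time,
    # in the spirit of classifier-free guidance.
    return pred_uncond + w * (pred_cond - pred_uncond)

Setting w = 1 recovers the purely conditional student, while larger w strengthens adherence to the text prompt, the usual trade-off in classifier-free guidance.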
