Oral in Workshop: Audio Imagination: NeurIPS 2024 Workshop AI-Driven Speech, Music, and Sound Generation

Improving Musical Accompaniment Co-creation via Diffusion Transformers

Javier Nistal · Marco Pasini · Stefan Lattner

Sat 14 Dec 9:30 a.m. PST — 9:45 a.m. PST

Abstract:

Building upon Diff-A-Riff, a latent diffusion model for musical instrument accompaniment generation, we present a series of improvements targeting quality, diversity, inference speed, and text-driven control. First, we upgrade the underlying autoencoder to a stereo-capable model with superior fidelity and replace the latent U-Net with a Diffusion Transformer. Additionally, we refine text prompting by training a cross-modality predictive network that translates text-derived CLAP embeddings into audio-derived CLAP embeddings. Finally, we improve inference speed by training the latent model within a consistency framework, achieving competitive quality with fewer denoising steps. We evaluate our model against the original Diff-A-Riff using objective metrics in ablation experiments, demonstrating promising advancements in all targeted areas. Sound examples are available at https://sonycslparis.github.io/improved_dar/.
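The cross-modality predictive network mentioned above maps text-derived CLAP embeddings to audio-derived CLAP embeddings. The sketch below is a minimal, hypothetical illustration of such a translator as a small MLP regressor; the 512-dimensional embedding size, layer widths, depth, and cosine-similarity objective are assumptions for illustration only and are not taken from the paper.

```python
# Hypothetical sketch of a text-to-audio CLAP embedding translator.
# Embedding dimension, architecture, and loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TextToAudioCLAPTranslator(nn.Module):
    """Predicts an audio-CLAP embedding from a text-CLAP embedding."""

    def __init__(self, embed_dim: int = 512, hidden_dim: int = 1024, depth: int = 3):
        super().__init__()
        layers = []
        in_dim = embed_dim
        for _ in range(depth - 1):
            layers += [nn.Linear(in_dim, hidden_dim), nn.GELU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, embed_dim))
        self.net = nn.Sequential(*layers)

    def forward(self, text_emb: torch.Tensor) -> torch.Tensor:
        # Normalize input and output so both lie on the CLAP unit hypersphere.
        pred = self.net(F.normalize(text_emb, dim=-1))
        return F.normalize(pred, dim=-1)


if __name__ == "__main__":
    model = TextToAudioCLAPTranslator()
    text_emb = torch.randn(8, 512)   # stand-in for CLAP text embeddings
    audio_emb = torch.randn(8, 512)  # stand-in for paired CLAP audio embeddings
    pred = model(text_emb)
    # One plausible training objective: cosine-similarity regression
    # toward the paired audio embedding.
    loss = 1.0 - F.cosine_similarity(pred, F.normalize(audio_emb, dim=-1), dim=-1).mean()
    loss.backward()
    print(f"example loss: {loss.item():.4f}")
```

At inference time, a model trained this way could take a user's text prompt, embed it with the CLAP text encoder, and translate it into the audio-embedding space used to condition the diffusion model, narrowing the modality gap between text prompts and the audio-derived conditioning the generator was trained on.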
