

Poster

Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment

Yiheng Li · Heyang Jiang · Akio Kodaira · Masayoshi TOMIZUKA · Kurt Keutzer · Chenfeng Xu

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we point out that a suboptimal noise-data mapping leads to slow training of diffusion models. During diffusion training, each image is diffused across the entire noise space, resulting in a mixture of all images at every point of the noise space. We emphasize that this random mixture in the noise-data mapping complicates the optimization of the denoising function in diffusion models. Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion, a simple yet effective method to improve the random mixture of the noise-data mapping. In physics, miscibility varies with the intermolecular forces at play. Inspired by this concept, we propose an assignment-then-diffusion training strategy. Specifically, before diffusing the image data into noise, we assign target noise to the image data by minimizing the total image-noise pair distance within a mini-batch. The assignment acts analogously to an external force that separates the diffuse-able areas of images, thus mitigating the inherent difficulties of diffusion training. Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area of each image while preserving the Gaussian distribution of the noise. This ensures that each image is projected only to nearby noise. To address the high complexity of the assignment algorithm, we introduce a quantized-assignment strategy, which reduces the computational overhead to a negligible level. Experiments demonstrate that our method achieves up to 3x faster training for Consistency Models and DDIM on the CIFAR dataset, and up to 1.3x faster training for Consistency Models on the CelebA dataset. In addition, we conduct a thorough analysis of Immiscible Diffusion, which sheds light on how it improves diffusion training speed while also improving fidelity.
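The assignment step described above can be sketched as a batch-level linear assignment problem. A minimal illustration, assuming PyTorch tensors and SciPy's Hungarian-algorithm solver (the helper name `assign_noise` is hypothetical, and the authors' actual implementation may differ):

```python
import torch
from scipy.optimize import linear_sum_assignment

def assign_noise(images: torch.Tensor, noise: torch.Tensor) -> torch.Tensor:
    """Permute a batch of noise samples so each image pairs with nearby noise."""
    # Pairwise L2 distances between flattened images and noise samples
    dist = torch.cdist(images.flatten(1), noise.flatten(1))
    # Minimize the total image-noise pair distance within the mini-batch
    # (the assignment step; solved here with the Hungarian algorithm)
    _, cols = linear_sum_assignment(dist.cpu().numpy())
    # Reordering i.i.d. Gaussian samples is just a permutation, so the
    # marginal noise distribution stays Gaussian
    return noise[cols]

# Toy usage: a mini-batch of 8 "images" and their diffusion target noise
images = torch.randn(8, 3, 32, 32)
noise = torch.randn(8, 3, 32, 32)
assigned = assign_noise(images, noise)
```

After this step, training proceeds as usual, except each image is diffused toward its assigned (nearby) noise rather than an arbitrary sample; the assignment never increases the total pair distance relative to the random pairing.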
