

Poster in Workshop: Workshop on Machine Learning and Compression

Bridging the Gap between Diffusion Models and Universal Quantization for Image Compression

Lucas Relic · Roberto Azevedo · Yang Zhang · Markus Gross · Christopher Schroers


Abstract:

By leveraging the similarity between quantization error and additive noise, diffusion-based image compression codecs can be built by using a diffusion model to “denoise” the artifacts introduced by quantization. However, we identify three gaps in this approach that cause the quantized data to fall outside the distribution of the diffusion model: a gap in noise level, a gap in noise type, and a gap caused by discretization. To address these issues, we propose a novel quantization-based forward diffusion process that is theoretically founded and bridges all three aforementioned gaps. This is achieved through universal quantization with a carefully tailored quantization schedule, as well as a diffusion model trained for uniform noise. Compared to previous work, our proposed architecture produces consistently realistic and detailed results, even at extremely low bitrates, while maintaining strong faithfulness to the original images.
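To make the quantization-error-as-noise correspondence concrete, below is a minimal NumPy sketch of universal (subtractive dithered) quantization, the classical result the abstract builds on: with a dither shared between encoder and decoder, the reconstruction error is uniformly distributed and independent of the signal. The function name and step size `delta` are illustrative assumptions, not the authors' implementation.

import numpy as np

def universal_quantize(x, delta, rng):
    """Quantize x with a subtractive dither of step size delta.

    The reconstruction error q - x is Uniform(-delta/2, delta/2) and
    independent of x, which is the property that lets a diffusion model
    trained on uniform noise "denoise" quantization artifacts.
    """
    # Dither shared by encoder and decoder (e.g., from a synchronized seed).
    u = rng.uniform(-delta / 2, delta / 2, size=np.shape(x))
    # Quantize the dithered signal, then subtract the dither back out.
    q = delta * np.round((x + u) / delta) - u
    return q

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)
err = universal_quantize(x, delta=0.5, rng=rng) - x
# Empirically: err spans (-0.25, 0.25) and is uncorrelated with x.
print(err.min(), err.max(), abs(np.corrcoef(x, err)[0, 1]))

Note that ordinary (undithered) rounding lacks this property: its error depends deterministically on x, which is one way the quantized data can fall outside the distribution a noise-trained diffusion model expects.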
