

Poster

UniFL: Improve Stable Diffusion via Unified Feedback Learning

Jiacheng Zhang · Jie Wu · Yuxi Ren · Xin Xia · Huafeng Kuang · Pan Xie · Jiashi Li · Xuefeng Xiao · Weilin Huang · Shilei Wen · Lean Fu · Guanbin Li

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Diffusion models have revolutionized the field of image generation, leading to a proliferation of high-quality models and diverse downstream applications. However, despite these significant advancements, current competitive solutions still suffer from several limitations, including inferior visual quality, a lack of aesthetic appeal, and inefficient inference, with no comprehensive solution in sight. To address these challenges, we present UniFL, a unified framework that leverages feedback learning to enhance diffusion models comprehensively. UniFL stands out as a universal, effective, and generalizable solution applicable to various diffusion models, such as SD1.5 and SDXL. Notably, UniFL incorporates three key components: perceptual feedback learning, which enhances visual quality; decoupled feedback learning, which improves aesthetic appeal; and adversarial feedback learning, which accelerates inference. In-depth experiments and extensive user studies validate the superior performance of our proposed method in improving both generation quality and inference acceleration. For instance, UniFL surpasses ImageReward by 17% in user preference for generation quality and outperforms LCM and SDXL Turbo by 57% and 20%, respectively, in 4-step inference.
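The abstract describes fine-tuning a diffusion model with feedback (reward) signals. The minimal sketch below illustrates the general idea of one such update step; it is not UniFL's actual training recipe. The modules `unet`, `vae`, and `reward_model`, the schedule tensor `alphas_cumprod`, and the one-step latent estimate are all illustrative assumptions.

```python
import torch

def feedback_learning_step(unet, vae, reward_model, noisy_latents, timesteps,
                           text_emb, alphas_cumprod, optimizer):
    """One illustrative reward-feedback update: denoise, decode, score, ascend the reward."""
    # Predict the noise with the trainable UNet (hypothetical call signature).
    noise_pred = unet(noisy_latents, timesteps, text_emb)

    # One-step estimate of the clean latent x0 from the noisy latent x_t.
    a_bar = alphas_cumprod[timesteps].view(-1, 1, 1, 1)
    latents_x0 = (noisy_latents - (1.0 - a_bar).sqrt() * noise_pred) / a_bar.sqrt()

    # Decode to pixel space and score with a frozen feedback (reward) model.
    images = vae.decode(latents_x0)
    reward = reward_model(images, text_emb)

    # Maximizing the reward is equivalent to minimizing its negative mean.
    loss = -reward.mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In this sketch only the UNet receives gradients; the VAE decoder and the reward model are assumed frozen, which is the common setup for reward-based fine-tuning of diffusion models.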
