Poster
Conditional Controllable Image Fusion
Xingxin Xu · Bing Cao · Pengfei Zhu · Qilong Wang · Qinghua Hu
Image fusion aims to integrate complementary information from multiple input images, acquired by different sensors or optical devices, into a single image containing richer information. Existing methods usually employ distinct constraint designs tailored to specific scenes, forming fixed fusion paradigms. However, such data-driven fusion methods are hardly applicable to all scenarios, especially rapidly changing environments. To address this issue, we propose a conditional controllable fusion (CCF) framework for general image fusion tasks that requires no task-specific training. Because fusion requirements vary dynamically from sample to sample, CCF applies sample-specific fusion constraints in practice. Exploiting the strong reconstruction capacity of denoising diffusion models, we inject these constraints into a pre-trained DDPM as adaptive fusion conditions. During the reverse diffusion stage, the appropriate conditions are dynamically selected so that the fusion process remains responsive to each sample's specific requirements; CCF thus calibrates the fused image conditionally, step by step. Extensive experiments validate the effectiveness of our method on general fusion tasks across diverse scenarios against competing methods, all without additional training.
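The core mechanism described above (guiding a pre-trained diffusion model's reverse process with adaptively selected fusion conditions) can be illustrated with a toy sketch. Everything here is hypothetical and greatly simplified: the "denoiser" is a plain damping update rather than a trained DDPM, the two fusion conditions (pixel-wise maximum and pixel-wise mean of the sources) are illustrative stand-ins, and the selection rule (guide with the currently most-violated condition) is only a stand-in for the paper's actual selection strategy.

```python
import random

def ccf_sketch(ir, vis, T=50, step=0.05, seed=0):
    """Toy sketch of condition-guided reverse sampling; illustrative only."""
    random.seed(seed)
    n = len(ir)

    # Two hypothetical fusion conditions, each a loss with analytic gradient.
    def max_loss(x):   # favour the brighter pixel of either source
        return sum((xi - max(a, b)) ** 2 for xi, a, b in zip(x, ir, vis))
    def max_grad(x, i):
        return 2.0 * (x[i] - max(ir[i], vis[i]))
    def mean_loss(x):  # favour the average of the two sources
        return sum((xi - 0.5 * (a + b)) ** 2 for xi, a, b in zip(x, ir, vis))
    def mean_grad(x, i):
        return 2.0 * (x[i] - 0.5 * (ir[i] + vis[i]))

    conditions = [(max_loss, max_grad), (mean_loss, mean_grad)]

    # Reverse "diffusion": start from noise; a damping update toward the
    # source mean stands in for a pre-trained DDPM denoiser so that the
    # sketch stays self-contained.
    x = [random.gauss(0, 1) for _ in range(n)]
    for _ in range(T):
        x = [0.9 * xi + 0.05 * (a + b) for xi, a, b in zip(x, ir, vis)]
        # Adaptive selection: guide with the condition violated most at
        # this step, then take a gradient step toward satisfying it.
        _, grad_fn = max(conditions, key=lambda c: c[0](x))
        x = [xi - step * grad_fn(x, i) for i, xi in enumerate(x)]
    return x

# Fuse two tiny 1-D "images" (flattened pixel lists).
ir = [0.2, 0.8, 0.5, 0.1]
vis = [0.6, 0.3, 0.4, 0.9]
fused = ccf_sketch(ir, vis)
```

Because the condition gradients are applied inside the sampling loop, each reverse step nudges the sample toward whichever constraint currently matters most; swapping in different loss functions changes the fusion behaviour without any retraining, which is the property the abstract highlights.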