Poster in Workshop: Adaptive Foundation Models: Evolving AI for Personalized and Efficient Learning
Enhancing Low-Light Imagery: A Fusion of Deep Learning and Diffusion Models for Superior Visibility
Yangfan He · Jianhui Wang · Sida Li · Haoyuan Li · Tianyu Shi
Low-light enhancement aims to improve the brightness of dark images. Traditional low-light image enhancement methods often rely heavily on large labeled datasets, which limits their adaptability to diverse real-world conditions, and they typically struggle to reconstruct content efficiently, particularly intricate details in extremely low-light settings. Recent deep-learning approaches improve performance but still generalize poorly across datasets and lighting conditions. To address these challenges, this paper presents an approach to low-light image enhancement that leverages a refined UNet architecture, augmented with contrastive learning and informed by large language models (LLMs) for context-aware modifications. The framework incorporates depth-map constraints and a novel Region-Focused Diffusion technique that allocates computational resources to essential image regions. To further improve generalization, we employ multi-task meta-learning, which ensures consistent brightness across generated images. Comprehensive quantitative and qualitative experiments validate the model's strong performance and generalization capabilities.
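The abstract does not specify how Region-Focused Diffusion allocates effort to essential regions. A minimal, purely illustrative sketch of one plausible ingredient is a region-weighted reconstruction loss, assuming a hypothetical binary mask marking essential regions (the mask, the `focus_weight` parameter, and the function name are all assumptions, not the authors' implementation):

```python
import numpy as np

def region_focused_loss(pred, target, region_mask, focus_weight=4.0):
    """Region-weighted MSE: pixels inside the (hypothetical) essential-region
    mask contribute focus_weight times more than pixels outside it.

    pred, target: float arrays of the same shape (e.g. enhanced vs. reference image)
    region_mask:  array of 0/1 values marking essential regions
    """
    # Weight map: 1.0 outside the mask, focus_weight inside it.
    weights = 1.0 + (focus_weight - 1.0) * region_mask
    return float(np.mean(weights * (pred - target) ** 2))
```

With identical per-pixel error, the loss is `focus_weight` times larger when every pixel is masked as essential, so gradient-based training concentrates on those regions.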