

Poster

You Only Look Around: Learning Illumination-Invariant Feature for Low-light Object Detection

Mingbo Hong · Shen Cheng · Haibin Huang · Haoqiang Fan · Shuaicheng Liu

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

In this paper, we introduce YOLA, a novel framework for object detection in low-light scenarios. Unlike previous works, we propose to tackle this challenging problem from the perspective of feature learning. Specifically, we propose to learn illumination-invariant features through the Lambertian image formation model. We observe that, under the Lambertian assumption, it is feasible to approximate illumination-invariant feature maps by exploiting the interrelationships between neighboring color channels and spatially adjacent pixels. By incorporating additional constraints, these relationships can be characterized in the form of convolutional kernels, which can be trained in a detection-driven manner within a network. Towards this end, we introduce a novel module dedicated to the extraction of illumination-invariant features from low-light images, which can be easily integrated into existing object detection frameworks. Our empirical findings reveal significant improvements in low-light object detection tasks, as well as promising results in both well-lit and over-lit scenarios.
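
To make the Lambertian argument in the abstract concrete, below is a minimal PyTorch sketch of the idea; it is not the authors' released code, and the module name, channel counts, epsilon, and the zero-sum weight projection are illustrative assumptions. Under the Lambertian model I = R · L, taking logarithms turns the illumination term into an additive offset, so convolution kernels whose weights sum to zero cancel any illumination component shared across a kernel's receptive field while remaining learnable in a detection-driven manner.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class IlluminationInvariantFeature(nn.Module):
    """Hypothetical sketch of illumination-invariant feature extraction.

    Under the Lambertian assumption, I = R * L, so log I = log R + log L.
    If the illumination L is approximately shared by neighboring pixels and
    color channels, convolving log I with kernels whose weights sum to zero
    cancels the log L term, leaving a feature driven mainly by reflectance.
    """

    def __init__(self, in_channels=3, out_channels=8, kernel_size=3, eps=1e-4):
        super().__init__()
        self.eps = eps
        # Learnable kernels; the zero-sum constraint is enforced in forward().
        self.weight = nn.Parameter(
            0.1 * torch.randn(out_channels, in_channels, kernel_size, kernel_size)
        )

    def forward(self, x):
        # Log domain: multiplicative illumination becomes an additive offset.
        log_x = torch.log(x.clamp(min=self.eps))
        # Project each kernel onto the zero-sum subspace so a constant offset
        # across the receptive field (the shared illumination) cancels out.
        w = self.weight - self.weight.mean(dim=(1, 2, 3), keepdim=True)
        return F.conv2d(log_x, w, padding=self.weight.shape[-1] // 2)


if __name__ == "__main__":
    # Toy check: globally dimming the image should leave the interior
    # features (away from zero-padded borders) essentially unchanged.
    module = IlluminationInvariantFeature()
    img = torch.rand(1, 3, 64, 64) + 0.1
    dark = 0.2 * img  # same scene under weaker illumination
    f_bright, f_dark = module(img), module(dark)
    print(torch.allclose(f_bright[:, :, 2:-2, 2:-2],
                         f_dark[:, :, 2:-2, 2:-2], atol=1e-4))  # expected: True
```

In such a sketch, the output of the module would be concatenated with (or fed in place of) the raw image into an existing detector such as YOLO, so the zero-sum kernels are trained end to end by the detection loss rather than by an explicit enhancement objective.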
