Poster

Towards Flexible 3D Perception: Object-Centric Occupancy Completion Augments 3D Object Detection in Long Sequence

Chaoda Zheng · Feng Wang · Naiyan Wang · Shuguang Cui · Zhen Li

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

While 3D object bounding box (bbox) representations have been widely used in autonomous driving perception, they lack the ability to capture the intricate details of an object's inner geometry. Recently, occupancy has emerged as a promising alternative for 3D scene perception. However, constructing a high-resolution occupancy map remains infeasible for large scenes due to computational constraints. Recognizing that foreground objects are more critical than background elements but only occupy a small portion of the scene, we introduce object-centric occupancy as a supplement to object bboxes. This representation not only provides intricate details for detected objects but also allows for higher voxel resolutions in practical applications. We advance the development of object-centric occupancy perception from both data and algorithmic perspectives. On the data side, we construct the first object-centric occupancy dataset from scratch using an automated pipeline. From the algorithmic standpoint, we introduce a novel object-centric occupancy completion network equipped with an implicit shape decoder that manages dynamic-size occupancy generation. This network accurately predicts the complete object-centric occupancy volume for drifted object proposals by leveraging temporal information from long sequences. Our method demonstrates robust performance in completing object shapes under noisy detection and tracking conditions. Additionally, we show that our occupancy features significantly enhance the detection results of state-of-the-art 3D object detectors, especially for incomplete or distant objects in the Waymo Open Dataset.
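To make the representation concrete: a minimal sketch of what "object-centric occupancy" means, assuming the simplest possible construction (voxelizing the LiDAR points inside a detected bbox in the object's local frame). This is an illustration only, not the paper's automated labeling pipeline or completion network; the function name and parameters are hypothetical.

```python
import numpy as np

def object_centric_occupancy(points, box_center, box_size, box_yaw, resolution=16):
    """Voxelize points inside a 3D bbox into a local occupancy grid.

    Hypothetical illustration of the object-centric occupancy idea:
    because the grid covers only one object, a high voxel resolution
    stays cheap regardless of scene size.

    points: (N, 3) array in the sensor/world frame.
    box_center, box_size: (3,) arrays; box_yaw: rotation about the z-axis.
    Returns a boolean (resolution, resolution, resolution) grid
    expressed in the object's canonical (local) frame.
    """
    # Rotate points into the object's local frame (inverse yaw rotation).
    c, s = np.cos(-box_yaw), np.sin(-box_yaw)
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    local = (np.asarray(points) - np.asarray(box_center)) @ R.T

    # Keep only points inside the box, then map them to voxel indices.
    half = np.asarray(box_size) / 2.0
    inside = np.all(np.abs(local) < half, axis=1)
    idx = ((local[inside] + half) / (2.0 * half) * resolution).astype(int)
    idx = np.clip(idx, 0, resolution - 1)

    occ = np.zeros((resolution,) * 3, dtype=bool)
    occ[idx[:, 0], idx[:, 1], idx[:, 2]] = True
    return occ
```

A single-frame grid built this way is sparse and view-dependent; the completion network described in the abstract is what fills in the full shape by aggregating such observations over long sequences.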
