

Poster

MVSDet: Multi-View Indoor 3D Object Detection via Efficient Plane Sweeps

Yating Xu · Chen Li · Gim Hee Lee

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

The key challenge of multi-view indoor 3D object detection is to infer accurate geometry from images for precise 3D detection. Previous methods rely on NeRF for geometry reasoning. However, the geometry extracted from NeRF is generally inaccurate, which leads to sub-optimal detection performance. In this paper, we propose MVSDet, which utilizes plane sweeps for geometry-aware 3D object detection. To circumvent the need for a large number of depth planes to achieve accurate depth prediction, we design a probabilistic sampling and soft weighting mechanism that decides the placement of pixel features on the 3D volume. For each pixel, we select the top-scoring locations in the probability volume and use their probability scores to indicate the confidence of the placement. We further apply the recent pixel-aligned Gaussian Splatting to regularize depth prediction and improve detection performance with little computation overhead. Extensive experiments on the ScanNet and ARKitScenes datasets demonstrate the superiority of our model. Our code will be available as open-source upon paper acceptance.
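To make the probabilistic sampling and soft weighting idea concrete, the following is a minimal PyTorch sketch, not the authors' implementation: it assumes a per-pixel depth probability volume from a plane-sweep cost volume, keeps only the top-k depth planes per pixel, and scatters the pixel feature into the 3D volume weighted by the renormalized probabilities. The function name `place_features_topk`, the tensor layouts, and the choice of k are illustrative assumptions.

```python
import torch

def place_features_topk(feat: torch.Tensor, depth_prob: torch.Tensor, k: int = 3) -> torch.Tensor:
    """Hypothetical sketch of top-k probabilistic sampling and soft weighting.

    feat:       (B, C, H, W) image features.
    depth_prob: (B, D, H, W) softmaxed plane-sweep depth probabilities.
    Returns:    (B, C, D, H, W) feature volume in which each pixel's feature
                is placed only at its k most likely depth planes, weighted by
                the renormalized probability scores.
    """
    B, C, H, W = feat.shape
    D = depth_prob.shape[1]

    # Select the k most likely depth planes per pixel and their scores.
    topk_prob, topk_idx = depth_prob.topk(k, dim=1)                       # (B, k, H, W)
    topk_prob = topk_prob / topk_prob.sum(dim=1, keepdim=True).clamp(min=1e-6)

    # Weight the pixel feature by each selected plane's confidence.
    weighted = feat.unsqueeze(2) * topk_prob.unsqueeze(1)                 # (B, C, k, H, W)

    # Scatter the weighted features into an otherwise empty volume.
    volume = feat.new_zeros(B, C, D, H, W)
    idx = topk_idx.unsqueeze(1).expand(-1, C, -1, -1, -1).contiguous()    # (B, C, k, H, W)
    volume.scatter_(2, idx, weighted)
    return volume
```

Compared with densely spreading every pixel feature over all D planes, this sparse placement keeps the volume construction cost close to that of a coarse plane sweep while still letting confident depth hypotheses dominate the 3D features.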
