

Poster

Event-3DGS: Event-based 3D Reconstruction Using 3D Gaussian Splatting

Haiqian Han · Jianing Li · Henglu Wei · Xiangyang Ji

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Event cameras, offering high temporal resolution and high dynamic range, have brought a new perspective to addressing 3D reconstruction challenges in fast-motion and low-light scenarios. Most existing methods use Neural Radiance Fields (NeRF) for event-based photorealistic 3D reconstruction. However, these NeRF-based methods suffer from time-consuming training and inference, as well as the limited scene-editing capability of implicit representations. To address these problems, we propose Event-3DGS, the first event-based reconstruction framework that uses 3D Gaussian splatting (3DGS) to synthesize novel views freely from event streams. Technically, we first propose an event-based 3DGS framework that directly processes event data and reconstructs 3D scenes by simultaneously optimizing scene and sensor parameters. We then present a high-pass filter-based photovoltage estimation module, which effectively reduces noise in the event data and improves the robustness of Event-3DGS in real-world scenarios. Finally, we design an event-based 3D reconstruction loss to optimize the parameters of Event-3DGS for better reconstruction quality. Experimental results show that Event-3DGS outperforms state-of-the-art methods in reconstruction quality on both simulated and real-world datasets. We also verify that Event-3DGS performs robust 3D reconstruction even in real-world scenarios with extreme noise, fast motion, and low-light conditions. Our code is available in the supplementary material.
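To make the event-side supervision concrete, below is a minimal sketch of two pieces the abstract alludes to: a leaky-integrator ("high-pass filter") photovoltage estimate accumulated from raw events, and a standard event-based reconstruction loss that compares rendered log-intensity differences against accumulated event polarities. This is not the authors' implementation; the function names and parameters (`tau`, `contrast_threshold`, the window-endpoint renders) are illustrative assumptions based on the common event generation model.

```python
# Hedged sketch, not the paper's code: (1) per-pixel photovoltage estimation via
# leaky integration of events (the exponential decay acts as a high-pass filter,
# damping slowly accumulating noise events), and (2) a standard event-supervision
# loss matching rendered log-intensity changes to accumulated event polarities.
import torch


def estimate_photovoltage(events, height, width, tau=0.1, contrast_threshold=0.2):
    """Leaky integration of an event stream into a per-pixel photovoltage map.

    events: (N, 4) tensor with columns (x, y, t, polarity in {-1, +1}),
            sorted by timestamp t (seconds).
    Returns an (height, width) map of estimated log-intensity change.
    """
    voltage = torch.zeros(height, width)
    last_t = torch.zeros(height, width)
    for x, y, t, p in events:
        xi, yi = int(x), int(y)
        # Exponential decay toward zero between events: high-pass behavior.
        decay = torch.exp(-(t - last_t[yi, xi]) / tau)
        voltage[yi, xi] = voltage[yi, xi] * decay + p * contrast_threshold
        last_t[yi, xi] = t
    return voltage


def event_reconstruction_loss(render_t0, render_t1, events,
                              contrast_threshold=0.2, eps=1e-6):
    """Event-based reconstruction loss (a common formulation, assumed here).

    render_t0, render_t1: rendered intensity images (H, W) at the endpoints of
                          an event window.
    events: (N, 4) events inside that window, columns (x, y, t, polarity).
    """
    h, w = render_t0.shape
    # Signed event accumulation per pixel over the window.
    accumulated = torch.zeros(h, w)
    xs, ys = events[:, 0].long(), events[:, 1].long()
    accumulated.index_put_((ys, xs), events[:, 3] * contrast_threshold,
                           accumulate=True)
    # Predicted log-intensity change from the two renders.
    predicted = torch.log(render_t1 + eps) - torch.log(render_t0 + eps)
    return torch.mean((predicted - accumulated) ** 2)
```

In an actual pipeline, quantities like these would supervise the 3DGS optimization loop alongside the usual densification and pruning of Gaussians; the sketch only illustrates how event data can stand in for photometric ground truth.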
