

Spotlight Poster

DiffSF: Diffusion Models for Scene Flow Estimation

Yushan Zhang · Bastian Wandt · Maria Magnusson · Michael Felsberg

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Scene flow estimation is an essential ingredient for a variety of real-world applications, especially for autonomous agents such as self-driving cars and robots. While recent scene flow estimation approaches achieve reasonable accuracy, their applicability to real-world systems would additionally benefit from a reliability measure. Aiming at improving accuracy while also providing an estimate of uncertainty, we propose DiffSF, which combines transformer-based scene flow estimation with denoising diffusion models. In the diffusion process, the ground-truth scene flow vector field is gradually perturbed by adding Gaussian noise. In the reverse process, starting from randomly sampled Gaussian noise, the scene flow vector field prediction is recovered by conditioning on a source and a target point cloud. We show that the diffusion process greatly increases the robustness of predictions compared to prior approaches, resulting in state-of-the-art performance on standard scene flow estimation benchmarks. Moreover, by sampling multiple times with different initial states, the denoising process produces multiple hypotheses, which enables measuring the output uncertainty and allows our approach to detect the majority of inaccurate predictions. The code can be found in the supplemental material and will be made publicly available upon acceptance.
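To make the described pipeline concrete, below is a minimal sketch of a DDPM-style forward noising step, a reverse sampling loop conditioned on source and target point clouds, and multi-hypothesis sampling for uncertainty. This is an illustration under standard DDPM assumptions, not the authors' implementation: the `denoiser` network, its call signature, the noise schedule, and all hyperparameters are hypothetical.

```python
import torch

# Hypothetical sketch of the process described in the abstract.
# The denoiser interface and all hyperparameters are assumptions
# for illustration; they are not taken from the DiffSF paper.

T = 1000                                      # number of diffusion steps (assumed)
betas = torch.linspace(1e-4, 0.02, T)         # standard linear DDPM noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

def forward_diffuse(flow_gt, t):
    """Forward process: perturb the ground-truth flow field with Gaussian noise.
    flow_gt: (B, N, 3) ground-truth scene flow, t: (B,) step indices."""
    noise = torch.randn_like(flow_gt)
    a_bar = alpha_bars[t].view(-1, 1, 1)
    noisy_flow = a_bar.sqrt() * flow_gt + (1.0 - a_bar).sqrt() * noise
    return noisy_flow, noise

@torch.no_grad()
def reverse_sample(denoiser, src_pts, tgt_pts):
    """Reverse process: recover a flow prediction starting from pure noise,
    conditioned on the source and target point clouds."""
    flow = torch.randn_like(src_pts)          # start from random Gaussian noise
    for t in reversed(range(T)):
        t_batch = torch.full((src_pts.shape[0],), t, dtype=torch.long)
        # Assumed interface: the network predicts the added noise.
        eps = denoiser(flow, src_pts, tgt_pts, t_batch)
        a, a_bar = alphas[t], alpha_bars[t]
        flow = (flow - (1.0 - a) / (1.0 - a_bar).sqrt() * eps) / a.sqrt()
        if t > 0:
            flow = flow + betas[t].sqrt() * torch.randn_like(flow)
    return flow

@torch.no_grad()
def predict_with_uncertainty(denoiser, src_pts, tgt_pts, n_hypotheses=8):
    """Sample multiple hypotheses from different initial noise; their spread
    serves as a per-point uncertainty estimate."""
    samples = torch.stack(
        [reverse_sample(denoiser, src_pts, tgt_pts) for _ in range(n_hypotheses)]
    )
    return samples.mean(dim=0), samples.std(dim=0)  # prediction, uncertainty
```

In this reading, points where the hypotheses disagree strongly (high standard deviation) flag likely-inaccurate predictions, which matches the abstract's claim that multiple denoising runs enable detecting unreliable outputs.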
