

Poster

A Global Depth-Range-Free Multi-View Stereo Transformer Network with Pose Embedding

Yitong Dong · Yijin Li · Zhaoyang Huang · Weikang Bian · Jingbo Liu · Hujun Bao · Zhaopeng Cui · Hongsheng Li · Guofeng Zhang

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

In this paper, we propose a novel multi-view stereo (MVS) framework that eliminates the need for a depth-range prior. Unlike recent prior-free MVS methods that operate in a pair-wise manner, our method considers all source images simultaneously. Specifically, we introduce a Multi-view Disparity Attention (MDA) module to aggregate long-range context information within and across multi-view images. Given the asymmetry of the epipolar disparity flow, the key to our method lies in accurately modeling multi-view motion constraints. We integrate pose embeddings that encapsulate information such as the multi-view camera poses, providing implicit motion constraints for the attention-dominated fusion of multi-view motion features. Additionally, because the observation quality of the same reference-frame pixel differs significantly across source frames, we maintain a separate hidden state for each source image. We explicitly estimate the quality of the sampled points on each source image's epipolar line that correspond to the current pixel, and dynamically update the hidden states through an uncertainty estimation module. Extensive results on the DTU dataset and the Tanks & Temples benchmark demonstrate the effectiveness of our method.
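The abstract gives only a high-level description of the architecture; the implementation is not included on this page. As a rough illustration of the ideas described, the PyTorch sketch below combines the three stated ingredients: attention over epipolar samples pooled across all source views, a pose embedding added to the keys/values as an implicit motion constraint, and an uncertainty-gated update of a per-source hidden state. All module names, tensor shapes, and hyperparameters here are assumptions for illustration, not the authors' code.

```python
import torch
import torch.nn as nn

class MultiViewDisparityAttentionSketch(nn.Module):
    """Hypothetical sketch of the MDA idea from the abstract.

    Each reference pixel attends jointly to features sampled along the
    epipolar lines of ALL source views (not pair-wise), with a per-view
    pose embedding added to the keys/values so attention is conditioned
    on relative camera motion. A per-source hidden state is updated via
    an uncertainty (quality) score. Shapes and heads are assumptions.
    """

    def __init__(self, dim: int = 64, num_heads: int = 4, pose_dim: int = 16):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Projects a per-view pose code (e.g. a flattened relative pose)
        # into feature space; acts as an implicit motion constraint.
        self.pose_proj = nn.Linear(pose_dim, dim)
        # Quality head: scores how well each source view's epipolar
        # samples explain the reference pixel (the "uncertainty" gate).
        self.quality = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 1))
        # Recurrent update of the per-source hidden state.
        self.gru = nn.GRUCell(dim, dim)

    def forward(self, ref_feat, src_feats, pose_codes, hidden):
        # ref_feat:   (B, N, C)        queries, one per reference pixel
        # src_feats:  (B, V, N, S, C)  S epipolar samples per pixel, V views
        # pose_codes: (B, V, pose_dim) one pose code per source view
        # hidden:     (B, V, N, C)     per-source hidden states
        B, V, N, S, C = src_feats.shape
        pose_emb = self.pose_proj(pose_codes)                    # (B, V, C)
        # Add each view's pose embedding to all of its epipolar samples.
        kv = src_feats + pose_emb[:, :, None, None, :]
        # Flatten views and samples into one token sequence per pixel,
        # so attention fuses all source images simultaneously.
        kv = kv.permute(0, 2, 1, 3, 4).reshape(B * N, V * S, C)
        q = ref_feat.reshape(B * N, 1, C)
        fused, _ = self.attn(q, kv, kv)                          # (B*N, 1, C)
        fused = fused.reshape(B, N, C)
        # Confidence per (view, pixel), pooled over epipolar samples.
        conf = torch.sigmoid(self.quality(src_feats.mean(dim=3)))  # (B, V, N, 1)
        # GRU update, then blend old/new states by estimated quality.
        upd = self.gru(
            fused[:, None].expand(-1, V, -1, -1).reshape(-1, C),
            hidden.reshape(-1, C)).reshape(B, V, N, C)
        hidden = conf * upd + (1.0 - conf) * hidden
        return fused, hidden

# Toy usage with made-up sizes: 2 scenes, 3 source views, 100 pixels,
# 8 epipolar samples per pixel, 64-dim features.
mda = MultiViewDisparityAttentionSketch(dim=64)
fused, h = mda(torch.randn(2, 100, 64),
               torch.randn(2, 3, 100, 8, 64),
               torch.randn(2, 3, 16),
               torch.zeros(2, 3, 100, 64))
```

The gating step mirrors the abstract's motivation: views that observe a pixel poorly (occlusion, oblique viewing angle) get a low confidence score, so their hidden states change little, while well-observed views dominate the update.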
