Poster
LiveScene: Language Embedding Interactive Radiance Fields for Physical Scene Control and Rendering
Delin Qu · Qizhi Chen · Pingrui Zhang · Xianqiang Gao · Bin Zhao · Zhigang Wang · Dong Wang · Xuelong Li
This paper advances physical-world interactive scene reconstruction by extending interactive object reconstruction from the single-object level to the complex scene level. To this end, we first construct one simulated and one real scene-level physical interaction dataset, together containing 28 scenes with multiple interactive objects per scene. Furthermore, to accurately model the interactive motions of multiple objects in complex scenes, we propose LiveScene, the first scene-level language-embedded interactive neural radiance field that efficiently reconstructs and controls multiple interactive objects in complex scenes. LiveScene introduces an efficient factorization that decomposes the interactive scene into multiple local deformable fields, each reconstructing an individual interactive object, enabling the first accurate and independent control of multiple interactive objects in a complex scene. Moreover, we introduce an interaction-aware language embedding method that generates state-dependent language embeddings to localize individual interactive objects under different interactive states, enabling arbitrary control of interactive objects via natural language. Finally, we evaluate LiveScene on the constructed datasets OmniSim and InterReal, which cover various simulated and real-world complex scenes. Extensive experimental results demonstrate that the proposed approach achieves state-of-the-art novel view synthesis and language grounding performance, surpassing existing methods by +9.89, +1.30, and +1.99 in PSNR on the CoNeRF Synthetic, OmniSim #challenging, and InterReal #challenging datasets, respectively, and by +65.12 in mIoU on OmniSim. The code and dataset will be released upon publication.
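To make the factorization idea concrete, below is a minimal sketch (not the authors' code; all class and variable names are hypothetical) of how a scene could be decomposed into per-object local deformable fields: each small MLP warps points inside one object's bounding box into a shared canonical space, conditioned on that object's interaction state, before a shared canonical radiance field is queried.

```python
# Minimal sketch, assuming per-object axis-aligned bounding boxes and a
# scalar interaction state per object. Not the paper's implementation.
import torch
import torch.nn as nn

class LocalDeformField(nn.Module):
    """Small MLP that warps points inside one object's bounding box,
    conditioned on that object's scalar interaction state (e.g. a door angle)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3 + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # per-point offset into canonical space
        )

    def forward(self, x, state):
        # x: (N, 3) points; state: (N, 1) interaction value in [0, 1]
        return x + self.mlp(torch.cat([x, state], dim=-1))

class FactorizedScene(nn.Module):
    def __init__(self, num_objects, bboxes, canonical_field):
        super().__init__()
        self.fields = nn.ModuleList(LocalDeformField() for _ in range(num_objects))
        self.register_buffer("bboxes", bboxes)  # (K, 2, 3) min/max corners
        self.canonical = canonical_field        # shared static radiance field

    def forward(self, x, states):
        # x: (N, 3) query points; states: (K,) one interaction value per object
        x_canon = x.clone()
        for k, field in enumerate(self.fields):
            lo, hi = self.bboxes[k]
            inside = ((x >= lo) & (x <= hi)).all(dim=-1)  # route points by bbox
            if inside.any():
                s = states[k].view(1, 1).expand(int(inside.sum()), 1)
                x_canon[inside] = field(x[inside], s)
        return self.canonical(x_canon)  # e.g. RGB + density per point

# Usage with a dummy canonical field standing in for a full NeRF:
canonical = nn.Sequential(nn.Linear(3, 64), nn.ReLU(), nn.Linear(64, 4))
scene = FactorizedScene(
    num_objects=2,
    bboxes=torch.tensor([[[0., 0., 0.], [.5, .5, .5]],
                         [[.5, .5, .5], [1., 1., 1.]]]),
    canonical_field=canonical,
)
out = scene(torch.rand(1024, 3), states=torch.tensor([0.3, 0.9]))
```

Because each object owns its own deformation field and state variable, changing one object's state leaves the others untouched, which is what makes independent control possible in this setup.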
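The interaction-aware language embedding can likewise be sketched. The snippet below is an illustrative stand-in, not the paper's method: it makes an object's language embedding vary with its interaction state by interpolating between the embeddings of its two end states, then localizes the object by cosine similarity against per-point language features. `encode_text` is a placeholder for any CLIP-style text encoder returning a (D,) tensor.

```python
# Minimal sketch, assuming a scalar state in [0, 1] (0 = closed, 1 = open)
# and per-point language features rendered from the scene. Hypothetical names.
import torch
import torch.nn.functional as F

def interaction_aware_embedding(encode_text, open_prompt, closed_prompt, state):
    """Return a text embedding that tracks the object's current state."""
    e_open = F.normalize(encode_text(open_prompt), dim=-1)
    e_closed = F.normalize(encode_text(closed_prompt), dim=-1)
    # Linear interpolation in embedding space, re-normalized; the paper's
    # actual embedding scheme may differ.
    e = (1.0 - state) * e_closed + state * e_open
    return F.normalize(e, dim=-1)

def localize(query_emb, point_features):
    # point_features: (N, D) language features per 3D point/pixel.
    # Cosine similarity highlights the points belonging to the queried object;
    # threshold (or argmax across queries) to obtain a mask.
    return F.normalize(point_features, dim=-1) @ query_emb

# e.g., with any text encoder producing a (D,) tensor:
# emb = interaction_aware_embedding(clip_encode, "an open oven door",
#                                   "a closed oven door", state=0.7)
```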