Poster
Vista: A Generalizable Driving World Model with High Fidelity and Versatile Controllability
Shenyuan Gao · Jiazhi Yang · Li Chen · Kashyap Chitta · Yihang Qiu · Andreas Geiger · Jun Zhang · Hongyang Li
World models can foresee the outcomes of different actions, which is of paramount importance for autonomous driving. Nevertheless, existing driving world models still fall short in generalization to unseen environments, prediction fidelity of critical details, and action controllability for flexible applications. In this paper, we present Vista, a generalizable driving world model with high fidelity and versatile controllability. Based on a systematic diagnosis of existing methods, we introduce several key ingredients to address these limitations. To accurately predict real-world dynamics at high resolution, we propose two novel losses to promote the learning of moving instances and structural information. We also devise an effective latent replacement approach to inject historical frames as priors for coherent long-term rollouts. For action controllability, we incorporate a versatile set of controls from high-level intentions (command, goal point) to low-level maneuvers (trajectory, angle, and speed) through an efficient learning strategy. After large-scale training, the abilities of Vista generalize seamlessly to diverse scenarios in a zero-shot manner. Extensive experiments on multiple datasets show that Vista outperforms the most advanced general-purpose video generator in over 70% of comparisons and surpasses the best-performing driving world model by 55% in FID and 27% in FVD. Moreover, for the first time, we establish a generalizable reward function that can be used for real-world driving action evaluation. Our code and model will be made publicly available. Videos can be found at this anonymous page.
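The latent replacement idea mentioned above can be illustrated with a minimal sketch: encoded latents of observed past frames are substituted directly into the noisy latent sequence so that the generative rollout is anchored on real history. The function name, tensor shapes, and NumPy framing here are illustrative assumptions, not Vista's actual implementation.

```python
import numpy as np

def latent_replacement(noisy_latents, history_latents):
    """Condition a video rollout on past frames by substituting their
    clean, encoded latents into the sampled noisy latent sequence.

    noisy_latents:   (T, C, H, W) latents sampled from the prior
    history_latents: (K, C, H, W) clean latents of K observed frames
    Returns the combined latents and a boolean mask marking which
    frames act as fixed conditioning priors.
    """
    T = noisy_latents.shape[0]
    K = history_latents.shape[0]
    assert K <= T, "cannot condition on more frames than the rollout length"

    out = noisy_latents.copy()
    out[:K] = history_latents          # inject history as fixed priors
    mask = np.zeros(T, dtype=bool)
    mask[:K] = True                    # conditioning frames stay clean
    return out, mask
```

In such a scheme, the denoiser would update only the frames where the mask is False, keeping the historical frames intact across sampling steps so long rollouts remain coherent with the observed past.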