Poster
OpenCDA-Loop: A Closed-loop Benchmarking Platform for End-to-end Evaluation of Cooperative Perception
Chia-Ju Chen · Runsheng Xu · Wei Shao · Junshan Zhang · Zhengzhong Tu
Vehicle-to-vehicle (V2V) cooperative perception systems hold immense promise for surpassing the limitations of single-agent lidar-based frameworks in autonomous driving. While existing benchmarks have primarily focused on object detection accuracy, a critical gap remains in understanding how upstream perception performance impacts system-level behavior, and ultimately driving safety and efficiency. In this work, we address the crucial question of how the detection accuracy of cooperative detection models influences downstream behavioral planning decisions in an end-to-end cooperative driving simulator. To achieve this, we introduce a novel simulation framework, \textbf{OpenCDA-Loop}, that integrates the OpenCDA cooperative driving simulator with the OpenCOOD cooperative perception toolkit. This integration enables the holistic evaluation of perception models by running any 3D detection model inside OpenCDA in a real-time, online fashion, yielding a closed-loop simulation that directly assesses the impact of perception capabilities on safety-centric planning performance. To challenge and advance the state of the art in V2V perception, we further introduce the \textbf{OPV2V-Safety} dataset, consisting of twelve challenging, open pre-crash scenarios designed following National Highway Traffic Safety Administration (NHTSA) reports. Our findings reveal that OPV2V-Safety indeed challenges prior state-of-the-art V2V detection models, and that our safety benchmark yields new insights into evaluating perception models compared to the results on prior standard benchmarks. We envision that our end-to-end, closed-loop benchmarking platform will drive the community to rethink how perception models are evaluated at the system level, toward the future development of safe and efficient autonomous systems. The code and benchmark will be made publicly available.
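To make the closed-loop evaluation concrete, the following is a minimal Python sketch of what a single simulation step could look like. Every class and method name here (ClosedLoopBenchmark, collect_v2v_lidar, safety_metrics, and so on) is an illustrative placeholder, not the actual OpenCDA or OpenCOOD API; the sketch only conveys the structure of the loop described above.

# Hypothetical sketch of one closed-loop evaluation step. All names are
# illustrative placeholders, not the real OpenCDA/OpenCOOD interfaces.
class ClosedLoopBenchmark:
    def __init__(self, world, detector, planner):
        self.world = world        # simulation world (e.g., a CARLA/OpenCDA scenario)
        self.detector = detector  # any V2V 3D detection model (e.g., trained with OpenCOOD)
        self.planner = planner    # downstream behavioral planner

    def step(self):
        # 1. Gather lidar sweeps from the ego vehicle and cooperating vehicles.
        lidar_batch = self.world.collect_v2v_lidar()
        # 2. Run cooperative 3D detection online, inside the simulation loop.
        detections = self.detector.infer(lidar_batch)
        # 3. Plan on detected objects rather than ground truth, so perception
        #    errors propagate to driving behavior.
        control = self.planner.plan(detections, self.world.ego_state())
        # 4. Apply the control command and advance the simulator by one tick.
        self.world.apply_control(control)
        self.world.tick()
        # 5. Report safety-centric metrics (e.g., collisions, time-to-collision).
        return self.world.safety_metrics()

Under this sketch, comparing two detection models amounts to running the same scenario loop with each model and contrasting the resulting safety metrics, rather than detection accuracy alone.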