

Poster in Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

Uncertainty as a criterion for SOTIF evaluation of deep learning models in autonomous driving systems

Ho Suk

Keywords: [ Autonomous driving ] [ Safety of the intended functionality ] [ Evaluation criteria ] [ Uncertainty quantification ]


Abstract:

Ensuring the safety of deep learning models in autonomous driving systems is crucial. In compliance with the automotive safety standard ISO 21448, we propose uncertainty as a new, complementary evaluation criterion for ensuring the safety of the intended functionality (SOTIF) of deep learning-based systems. To evaluate and improve the trajectory prediction function of autonomous driving systems, we use epistemic uncertainty as the criterion, quantified by a single-forward-pass model in consideration of constraints on resources and response time. Experimental results on data collected from the CARLA simulator demonstrate that the uncertainty criterion can detect functional insufficiencies in unknown, potentially hazardous driving scenarios and subsequently trigger additional learning.
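
The abstract does not specify how the single-forward-pass epistemic uncertainty is computed, so the following is only a minimal illustrative sketch of one common approach, deep evidential regression, applied to a trajectory-prediction head. The module name, layer sizes, horizon, and the flagging threshold are assumptions for illustration, not values or methods taken from the paper.

```python
# Illustrative sketch (not the authors' exact model): single-forward-pass
# epistemic uncertainty for trajectory prediction via deep evidential regression.
# All sizes and the threshold below are assumptions for demonstration only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class EvidentialTrajectoryHead(nn.Module):
    """Predicts per-step (x, y) waypoints plus Normal-Inverse-Gamma evidence."""

    def __init__(self, feat_dim: int = 128, horizon: int = 12):
        super().__init__()
        self.horizon = horizon
        # 4 evidential parameters (gamma, nu, alpha, beta) per coordinate per step.
        self.head = nn.Linear(feat_dim, horizon * 2 * 4)

    def forward(self, feats: torch.Tensor):
        out = self.head(feats).view(-1, self.horizon, 2, 4)
        gamma = out[..., 0]                      # predicted mean waypoint
        nu = F.softplus(out[..., 1])             # evidence for the mean
        alpha = F.softplus(out[..., 2]) + 1.0    # keep alpha > 1 so variance is finite
        beta = F.softplus(out[..., 3])
        # Epistemic uncertainty of the mean: Var[mu] = beta / (nu * (alpha - 1)).
        epistemic = beta / (nu * (alpha - 1.0))
        return gamma, epistemic


if __name__ == "__main__":
    # One forward pass over a batch of scene features yields both the
    # predicted trajectory and its epistemic uncertainty.
    head = EvidentialTrajectoryHead(feat_dim=128, horizon=12)
    scene_feats = torch.randn(8, 128)            # stand-in for an encoder's output
    traj, epistemic = head(scene_feats)

    # Flag potentially hazardous (unknown) scenarios when the scene-level
    # epistemic uncertainty exceeds an illustrative threshold.
    scene_uncertainty = epistemic.mean(dim=(1, 2))
    flagged = scene_uncertainty > 0.5            # threshold is an assumption
    print(traj.shape, scene_uncertainty.shape, int(flagged.sum()))
```

In such a setup, scenes whose uncertainty exceeds the chosen threshold could be collected and used for additional training, which mirrors the criterion-then-retrain loop described in the abstract.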
