Poster in NeurIPS 2024 Workshop: Machine Learning and the Physical Sciences
DeepUQ: A Systematic Comparison of Aleatoric Uncertainties from Deep Learning Methods
Becky Nevin · Brian Nord · Aleksandra Ciprijanovic
Abstract:
Assessing the quality of aleatoric uncertainty estimates from uncertainty quantification (UQ) deep learning methods is important in scientific contexts, where uncertainty is physically meaningful and must be characterized and interpreted exactly. We present a systematic comparison of the aleatoric uncertainty measured by two UQ techniques, Deep Ensembles (DE) and Deep Evidential Regression (DER). Our method focuses on both zero-dimensional (0D) and two-dimensional (2D) data, to explore how the UQ methods function for different data dimensionalities. We investigate uncertainty injected on the input and output variables, and include a method to propagate uncertainty in the case of input uncertainty so that the predicted aleatoric uncertainty can be compared to the known values. We experiment with three levels of noise. The aleatoric uncertainty predicted across all models and experiments scales with the injected noise level. However, the predicted uncertainty, $\mathrm{std}(\sigma_{\rm al})$, is miscalibrated with respect to the true uncertainty for half of the DE experiments and almost all of the DER experiments. The predicted uncertainty is least accurate for both UQ methods in the 2D input uncertainty experiment and at the highest noise level. This motivates future work on post-facto calibration.
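The abstract does not spell out how the aleatoric term is read off from each method, but the quantities being compared are the standard ones for DE and DER. Below is a minimal sketch, assuming per-member mean/std heads for DE (Lakshminarayanan et al. 2017) and a Normal-Inverse-Gamma head for DER (Amini et al. 2020); the placeholder arrays stand in for real network outputs, and none of this is the authors' code.

```python
import numpy as np

# --- Deep Ensembles (DE): each of M members predicts a mean mu_i and an
# aleatoric standard deviation sigma_i per test point.
# Placeholder arrays of shape (M, N) for M members and N test points.
mu = np.random.rand(5, 100)      # per-member predicted means (placeholder)
sigma = np.random.rand(5, 100)   # per-member predicted aleatoric stds (placeholder)

# Aleatoric uncertainty: average the predicted variances across members.
aleatoric_de = np.sqrt(np.mean(sigma**2, axis=0))
# Epistemic uncertainty: spread of the member means.
epistemic_de = np.std(mu, axis=0)

# --- Deep Evidential Regression (DER): a single network predicts the four
# parameters (gamma, nu, alpha, beta) of a Normal-Inverse-Gamma prior.
# Placeholders chosen so that alpha > 1, as the moments below require.
gamma, nu, alpha, beta = (
    np.random.rand(4, 100) + np.array([[0.0], [1.0], [2.0], [1.0]])
)

# Closed-form moments of the Normal-Inverse-Gamma distribution:
aleatoric_der = np.sqrt(beta / (alpha - 1))          # E[sigma^2]
epistemic_der = np.sqrt(beta / (nu * (alpha - 1)))   # Var[mu]
```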
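The abstract likewise mentions a propagation method for the input-uncertainty case without specifying it. The usual first-order (delta-method) rule, shown here as an illustrative assumption rather than the paper's exact procedure, converts an injected input noise $\sigma_x$ into an equivalent output-scale uncertainty that can be compared against the predicted $\sigma_{\rm al}$:

```latex
% First-order (delta-method) propagation of input noise through a model y = f(x):
\sigma_y \;\approx\; \left|\frac{\partial f}{\partial x}\right| \sigma_x,
\qquad
\sigma_y^2 \;\approx\; \sum_i \left(\frac{\partial f}{\partial x_i}\right)^{\!2} \sigma_{x_i}^2
\quad \text{(independent inputs } x_i\text{)}.
```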