Poster
in
Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

NODE-GAMLSS: Interpretable Uncertainty Modelling via Deep Distributional Regression

Ananyapam De · Anton Thielmann · Benjamin Säfken

Keywords: [ neural regression ] [ additive models ] [ uncertainty modelling ] [ distributional regression ]


Abstract:

We propose NODE-GAMLSS, a framework for scalable uncertainty modelling through deep distributional regression. NODE-GAMLSS is an interpretable, attention-based deep learning architecture that models the distributional location, scale, and shape (LSS) parameters as functions of the data, rather than only the conditional mean, enabling quantile prediction and interpretation of feature effects. In benchmarks on simulated and real datasets against state-of-the-art interpretable distributional regression models, NODE-GAMLSS demonstrates superior quantile estimation, accuracy, and interpretability.
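The core idea of distributional regression can be illustrated with a minimal sketch (not the authors' code, and far simpler than the NODE-GAMLSS architecture): instead of fitting only a conditional mean, we fit the mean and the log-scale of a Normal distribution as functions of the input by minimizing the Gaussian negative log-likelihood, which then lets us read off any conditional quantile. All variable names and the linear parameterization are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch of distributional (LSS-style) regression: both the
# mean mu(x) and the log-scale log sigma(x) of a Normal are modelled as
# linear functions of x and fit jointly by gradient descent on the
# Gaussian negative log-likelihood. (Hypothetical toy model, not NODE-GAMLSS.)
rng = np.random.default_rng(0)
n = 2000
x = rng.uniform(-2, 2, n)
# Heteroscedastic data: the noise scale depends on x, so a mean-only
# regression would miss the varying uncertainty.
y = 1.5 * x + rng.normal(0.0, np.exp(0.5 * x))

w = np.zeros(4)  # [intercept_mu, slope_mu, intercept_logsig, slope_logsig]
lr = 0.05
for _ in range(3000):
    mu = w[0] + w[1] * x
    log_sig = w[2] + w[3] * x
    sig = np.exp(log_sig)
    r = (y - mu) / sig
    # Gradients of the mean NLL  log_sig + 0.5 * r**2 + const:
    g_mu = -r / sig          # d NLL / d mu
    g_logsig = 1.0 - r**2    # d NLL / d log_sig
    grad = np.array([g_mu.mean(), (g_mu * x).mean(),
                     g_logsig.mean(), (g_logsig * x).mean()])
    w -= lr * grad

# With fitted mu(x) and sigma(x), any conditional quantile is available,
# e.g. the 90% quantile is mu(x) + 1.2816 * sigma(x).
mu_hat = w[0] + w[1] * x
sig_hat = np.exp(w[2] + w[3] * x)
q90 = mu_hat + 1.2816 * sig_hat
coverage = float(np.mean(y <= q90))  # empirically close to 0.9
```

The same principle scales up: NODE-GAMLSS replaces the linear maps for each distribution parameter with an interpretable deep architecture, so the learned feature effects on location, scale, and shape can be inspected directly.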
