Poster
in
Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design
Uncertainty Quantification and Calibration for Audio-driven Disease Diagnosis
Shubham Kulkarni · Hideaki Watanabe · Fuminori Homma
Keywords: [ Audio Classification ] [ Calibration of ML Models ] [ Disease Diagnosis ] [ Uncertainty Quantification ] [ AI for Healthcare ]
Deep learning excels at analyzing multi-modal signals for healthcare diagnostics but lacks the ability to quantify confidence in its predictions, which can lead to overconfident, erroneous diagnoses. In this work, we propose to predict the model output and estimate the corresponding uncertainty independently. We present a unified audio-driven disease detection framework incorporating uncertainty quantification (UQ), achieved using a Dirichlet density approximation for model prediction and independent kernel distance learning in the latent feature space for UQ. This approach requires minimal modifications to existing audio encoder architectures and is extremely parameter-efficient compared to k-ensemble models. The uncertainty-aware model improves prediction reliability by producing confidence scores that closely match accuracy values. Evaluations on the largest publicly available respiratory disease datasets demonstrate the advantage of the proposed framework in accuracy, training time, and inference time over ensemble and dropout methods. The proposed model improves speech and audio analysis for medical diagnosis by identifying and calibrating uncertainties, enabling better decision-making and risk assessment; this is evidenced by high uncertainty scores when model accuracy is low. The proposed model contributes to speech technologies for healthcare by enhancing model transparency and reliability.
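The Dirichlet density approximation mentioned above follows the general evidential-learning pattern: the network emits non-negative per-class evidence, which parameterizes a Dirichlet distribution whose mean gives the class probabilities and whose total concentration gives a confidence measure. The sketch below illustrates this generic idea only; the function name `dirichlet_prediction` and the exact uncertainty formula (K divided by the Dirichlet strength, as in standard evidential deep learning) are assumptions, not the authors' specific implementation, and the kernel-distance UQ component is not shown.

```python
import numpy as np

def dirichlet_prediction(evidence):
    """Map non-negative per-class evidence to (class probabilities, uncertainty).

    Generic evidential-learning sketch (hypothetical, not the paper's code):
    alpha = evidence + 1 gives the Dirichlet concentration parameters;
    the Dirichlet mean is the predicted class distribution, and
    K / sum(alpha) is an uncertainty score in (0, 1], equal to 1.0
    when the model has gathered no evidence at all.
    """
    alpha = np.asarray(evidence, dtype=float) + 1.0  # Dirichlet concentrations
    strength = alpha.sum()                           # total evidence S
    prob = alpha / strength                          # expected class probabilities
    uncertainty = len(alpha) / strength              # K / S
    return prob, uncertainty

# Strong evidence for class 0 -> confident prediction, low uncertainty
p, u = dirichlet_prediction([20.0, 1.0, 1.0])   # p ≈ [0.84, 0.08, 0.08], u = 0.12
# No evidence -> uniform prediction, maximal uncertainty
p0, u0 = dirichlet_prediction([0.0, 0.0, 0.0])  # p0 = [1/3, 1/3, 1/3], u0 = 1.0
```

A single forward pass thus yields both a prediction and its uncertainty, which is what makes this family of methods cheaper than k-ensembles or Monte Carlo dropout, where k passes are needed.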