Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability

Early Exiting in Deep Neural Networks via Dirichlet-based Uncertainty Quantification

Feng Xia · Jake Snell · Tom Griffiths


Abstract:

Deep neural networks are renowned for their accuracy across a spectrum of machine learning tasks but often suffer from prolonged inference time due to their depth. Early exiting strategies mitigate this by allowing predictions to be output at intermediate layers. However, we observe that using total uncertainty as the exiting criterion does not consistently reflect true model uncertainty: traditional methods prevent ambiguous data from exiting early even when model uncertainty is low. To address this limitation, we propose a Dirichlet-based framework that directly quantifies model uncertainty. Models trained with our approach handle both ambiguous and unambiguous data in a more balanced way, enabling a higher proportion of ambiguous samples to exit early.
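
The abstract does not give the exact formulation, but the distinction it draws can be illustrated with the standard decomposition of predictive uncertainty under a Dirichlet distribution: total uncertainty is the entropy of the expected class probabilities, while model (epistemic) uncertainty is the mutual information between the label and the categorical parameters. The sketch below is a minimal illustration under that assumption; the function names, the exit threshold, and the exit rule itself are hypothetical and not necessarily the authors' method.

```python
import numpy as np
from scipy.special import digamma

def dirichlet_uncertainties(alpha):
    """Decompose predictive uncertainty for a Dirichlet over class
    probabilities with concentration parameters `alpha` (shape [K])."""
    alpha = np.asarray(alpha, dtype=float)
    alpha0 = alpha.sum()
    p_bar = alpha / alpha0  # expected class probabilities

    # Total uncertainty: entropy of the expected categorical distribution.
    total = -np.sum(p_bar * np.log(p_bar + 1e-12))

    # Expected data (aleatoric) uncertainty under the Dirichlet:
    # E[H[p]] = -sum_k p_bar_k * (digamma(alpha_k + 1) - digamma(alpha0 + 1))
    expected_data = -np.sum(p_bar * (digamma(alpha + 1) - digamma(alpha0 + 1)))

    # Model (epistemic) uncertainty: total minus expected data uncertainty.
    model = total - expected_data
    return total, expected_data, model

def should_exit(alpha, threshold=0.05):
    """Hypothetical exit rule: leave at this layer when model uncertainty
    falls below `threshold`, regardless of total uncertainty."""
    _, _, model = dirichlet_uncertainties(alpha)
    return model < threshold

# A large, near-uniform alpha describes an input the model is confident
# is ambiguous: total uncertainty is high, model uncertainty is low,
# so it can exit early; a small alpha keeps model uncertainty high.
print(should_exit([50.0, 48.0, 52.0]))  # True
print(should_exit([1.2, 1.0, 1.1]))     # False
```

Gating on model uncertainty rather than total uncertainty is what lets ambiguous-but-well-modeled inputs exit early, which a threshold on total uncertainty alone would block.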
