

Poster
in
Workshop: Workshop on Responsibly Building Next Generation of Multimodal Foundation Models

Just rephrase it! Uncertainty estimation in closed-source language models via multiple rephrased queries

Adam Yang · CHEN CHEN · Konstantinos Pitas

Keywords: [ hallucinations ] [ uncertainty ] [ prompts ]


Abstract:

We explore estimating the uncertainty of closed-source LLMs via multiple rephrasings of an original base query. Specifically, we ask the model multiple rephrased questions and use the similarity of the answers as an estimate of uncertainty. We diverge from previous work by i) providing rules for rephrasing that are simple to memorize and use in practice, and ii) proposing a theoretical framework for why multiple rephrased queries obtain calibrated uncertainty estimates. Our method demonstrates significant improvements in the calibration of uncertainty estimates compared to the baseline and provides intuition as to how query strategies should be designed for optimal test calibration.
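The core idea of the abstract can be illustrated with a minimal sketch: query the model with several rephrasings of the same question and convert answer agreement into an uncertainty score. The `toy_model` stand-in and the exact similarity measure (majority-vote agreement over normalized strings) are illustrative assumptions, not the paper's method; in practice the model would be a closed-source LLM behind an API and the similarity measure could be a semantic one.

```python
from collections import Counter

def agreement(answers):
    """Fraction of answers matching the most common answer,
    after simple normalization. Higher agreement suggests
    lower model uncertainty."""
    counts = Counter(a.strip().lower() for a in answers)
    top_count = counts.most_common(1)[0][1]
    return top_count / len(answers)

def estimate_uncertainty(query_fn, rephrasings):
    """Ask the model each rephrased question and report
    1 - agreement as a crude uncertainty estimate."""
    answers = [query_fn(q) for q in rephrasings]
    return 1.0 - agreement(answers)

# Hypothetical stand-in for a closed-source LLM API call.
def toy_model(question):
    return "Paris" if "capital" in question.lower() else "unsure"

rephrasings = [
    "What is the capital of France?",
    "Name the capital city of France.",
    "France's capital is which city?",
]
print(estimate_uncertainty(toy_model, rephrasings))  # 0.0: all answers agree
```

With a real API, `query_fn` would wrap the provider's chat endpoint, and a semantic similarity measure (e.g. embedding cosine similarity) would replace exact string matching.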
