

Spotlight Poster

Graph-based Uncertainty Metrics for Long-form Language Model Generations

Mingjian Jiang · Yangjun Ruan · Prasanna Sattigeri · Salim Roukos · Tatsunori Hashimoto

East Exhibit Hall A-C #4802
Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Recent advancements in Large Language Models (LLMs) have significantly improved text generation capabilities, but these systems are still known to hallucinate, and granular uncertainty estimation for long-form LLM generations remains challenging. In this work, we propose Graph Uncertainty, which represents the relationship between LLM generations and the claims within them as a bipartite graph and estimates claim-level uncertainty with a family of graph centrality metrics. Under this view, existing uncertainty estimation methods based on the concept of self-consistency can be seen as using degree centrality as an uncertainty measure, and we show that more sophisticated alternatives such as closeness centrality provide consistent gains in claim-level uncertainty estimation. Moreover, we present uncertainty-aware decoding techniques that leverage both the graph structure and uncertainty estimates to improve the factuality of LLM generations by preserving only the most reliable claims. Compared to existing methods, our graph-based uncertainty metrics lead to an average of 6.8% relative gains on AUPRC across various long-form generation settings, and our end-to-end system provides consistent 2-4% gains in factuality over existing decoding techniques while significantly improving the informativeness of generated responses.
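A minimal sketch of the bipartite-graph view described in the abstract, not the authors' implementation. It assumes that claims have already been extracted from each sampled generation and that a support check (e.g., an entailment model) decides which generations support which claims; the generations, claims, and edges below are hypothetical placeholders, and networkx is used for the centrality computations.

```python
import networkx as nx

# Hypothetical example: 3 sampled generations and 4 extracted claims.
generations = ["gen_0", "gen_1", "gen_2"]
claims = ["claim_A", "claim_B", "claim_C", "claim_D"]

# Each edge links a generation to a claim it supports. In practice these
# edges would come from a claim-extraction plus entailment pipeline;
# they are hard-coded here purely for illustration.
support_edges = [
    ("gen_0", "claim_A"), ("gen_0", "claim_B"),
    ("gen_1", "claim_A"), ("gen_1", "claim_C"),
    ("gen_2", "claim_A"), ("gen_2", "claim_B"), ("gen_2", "claim_D"),
]

# Build the bipartite graph of generations and claims.
G = nx.Graph()
G.add_nodes_from(generations, bipartite=0)
G.add_nodes_from(claims, bipartite=1)
G.add_edges_from(support_edges)

# Degree centrality on claim nodes corresponds to self-consistency-style
# scores; closeness centrality is one of the richer alternatives the
# abstract refers to.
degree = nx.degree_centrality(G)
closeness = nx.closeness_centrality(G)

for c in claims:
    print(f"{c}: degree={degree[c]:.2f}, closeness={closeness[c]:.2f}")
```

Under this sketch, claims supported by many generations (such as claim_A) receive higher centrality, i.e., lower estimated uncertainty, while sparsely supported claims score lower and would be the first candidates for removal in an uncertainty-aware decoding step.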
