

Poster
in
Workshop: Statistical Frontiers in LLMs and Foundation Models

Mitigating Hallucination in Large Language Models with Explanatory Prompting

Alexander Braverman · Weitong Zhang · Quanquan Gu

Keywords: [ large language model ] [ calibration ] [ hallucination ]

Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

A growing concern with the use of Large Language Models (LLMs) is the presence of hallucinated outputs. For tasks that require complex reasoning, hallucinations make LLMs unreliable and thus unsafe to deploy in a range of applications from healthcare to education. To combat this issue, we propose explanatory prompting, a methodology that gives an informal logical description of an algorithm needed to solve all instances of a given problem. To illustrate the use of explanatory prompting, we consider a Graph Connectivity problem on directed acyclic graphs. We evaluate our approach by experiments on the Flight Connectivity dataset, an instance of a Graph Connectivity problem (Zhang et al., 2023a). Our experiments demonstrate a decrease in hallucination rate from 44.8% in prior work to 1.8% using explanatory prompting. At the same time, we confirm that calibrated LLMs are bound to hallucinate by experimentally verifying a theoretical lower bound for hallucination (Kalai and Vempala, 2024).
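
To make the idea concrete, below is a minimal sketch (not the authors' exact prompt or evaluation code) of how an explanatory prompt for a flight-connectivity instance might be written, together with a ground-truth reachability check used to flag hallucinated answers. The prompt wording and the names `EXPLANATORY_PROMPT` and `is_reachable` are illustrative assumptions.

```python
# Sketch of explanatory prompting for a Graph Connectivity / Flight Connectivity task.
# The prompt spells out an informal reachability procedure that solves every instance;
# the BFS below is the ground truth used to check the model's yes/no answer.
from collections import deque

EXPLANATORY_PROMPT = """You are given a list of one-way flights between cities.
To decide whether you can travel from city A to city B, follow this procedure:
1. Start with a set of reachable cities containing only A.
2. Repeatedly add any city that has a direct flight from a city already in the set.
3. Stop when no new city can be added.
4. Answer "yes" if B is in the set, otherwise answer "no".
Flights: {flights}
Question: Can you travel from {src} to {dst}? Explain your steps, then answer yes or no.
"""

def is_reachable(edges, src, dst):
    """Ground-truth reachability via breadth-first search."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, []).append(v)
    seen, queue = {src}, deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            return True
        for v in adj.get(u, []):
            if v not in seen:
                seen.add(v)
                queue.append(v)
    return False

# Usage: format the prompt for one instance, query the LLM (not shown), and compare
# its yes/no answer against the ground truth to count hallucinations.
edges = [("A", "B"), ("B", "C"), ("D", "C")]
prompt = EXPLANATORY_PROMPT.format(flights=edges, src="A", dst="C")
print(is_reachable(edges, "A", "C"))  # True -> the correct answer is "yes"
```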
