

Poster in Workshop: Socially Responsible Language Modelling Research (SoLaR)

LLM Hallucination Reasoning with Zero-shot Knowledge Test

Seongmin Lee · Hsiang Hsu · Richard Chen

Keywords: [ LLM ] [ Hallucination ] [ Hallucination Reasoning ]


Abstract:

LLM hallucination, where LLMs occasionally generate unfaithful text, poses significant challenges for their practical applications. Most existing detection methods require external knowledge, LLM fine-tuning, or hallucination-labeled datasets, and they do not distinguish between different hallucination types, a distinction that is crucial for improving detection performance. We introduce a new task, Hallucination Reasoning, which classifies LLM-generated text into one of three categories: aligned, misaligned, or fabricated. Our novel source-free zero-shot method identifies whether an LLM has enough knowledge about a given prompt and text. Our experiments on new datasets demonstrate the effectiveness of our method in hallucination reasoning and underscore its importance for enhancing detection performance.
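To make the three-way task concrete, the following is a minimal sketch of the decision structure the abstract describes: first check whether the model has enough knowledge about the prompt (if not, the text is fabricated), then check whether the generated text agrees with that knowledge (if not, it is misaligned). This is not the authors' algorithm; the scoring functions, threshold values, and function names are hypothetical placeholders standing in for a zero-shot knowledge test.

```python
# Hypothetical sketch of the aligned / misaligned / fabricated decision.
# The scorers and thresholds below are illustrative assumptions, not the
# method proposed in the paper.
from typing import Callable


def classify_generation(
    prompt: str,
    text: str,
    knowledge_score: Callable[[str], float],       # how well the LLM "knows" the prompt topic
    alignment_score: Callable[[str, str], float],  # agreement between prompt/knowledge and text
    k_threshold: float = 0.5,
    a_threshold: float = 0.5,
) -> str:
    """Return one of 'fabricated', 'misaligned', or 'aligned'."""
    if knowledge_score(prompt) < k_threshold:
        # The model lacks knowledge about the prompt, so any generated
        # content cannot be grounded: label it fabricated.
        return "fabricated"
    if alignment_score(prompt, text) < a_threshold:
        # The model has relevant knowledge, but the text conflicts with it.
        return "misaligned"
    return "aligned"


if __name__ == "__main__":
    # Dummy scorers for demonstration only; a real system would derive
    # these from the LLM itself (e.g., via a zero-shot probing procedure).
    label = classify_generation(
        prompt="Who wrote 'Pride and Prejudice'?",
        text="Jane Austen wrote 'Pride and Prejudice'.",
        knowledge_score=lambda p: 0.9,
        alignment_score=lambda p, t: 0.8,
    )
    print(label)  # -> aligned
```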
