

Poster in Workshop: The First Workshop on Large Foundation Models for Educational Assessment

Fusion-Eval: Integrating Assistant Evaluators with LLMs

Lei Shu · Nevan Wichers · Liangchen Luo · Yun Zhu · Yinxiao Liu · Jindong Chen · Lei Meng

[ Project Page ]
Sun 15 Dec 12:25 p.m. PST — 2 p.m. PST

Abstract:

Evaluating natural language systems poses significant challenges, particularly in natural language understanding and high-level reasoning. In this paper, we introduce "Fusion-Eval", an innovative approach that leverages Large Language Models (LLMs) to integrate insights from various assistant evaluators. The LLM is given the example to evaluate along with scores from the assistant evaluators, each of which specializes in assessing a distinct aspect of responses. Fusion-Eval achieves a 0.962 system-level Kendall-Tau correlation with humans on SummEval and a 0.744 turn-level Spearman correlation on TopicalChat, both significantly higher than baseline methods. These results highlight Fusion-Eval's potential in the realm of natural language system evaluation.
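To make the described prompting scheme concrete, here is a minimal sketch of how an LLM might be given a candidate response together with assistant-evaluator scores, as the abstract describes. The prompt wording, aspect names, and the `call_llm` stub are illustrative assumptions, not the authors' implementation.

```python
def fusion_eval(source: str, response: str, assistant_scores: dict[str, float]) -> str:
    """Pair a candidate response with per-aspect assistant-evaluator scores
    in one prompt, then ask an LLM for a fused quality judgment."""
    score_lines = "\n".join(
        f"- {aspect}: {score:.2f}" for aspect, score in assistant_scores.items()
    )
    prompt = (
        "You are an expert evaluator of natural language responses.\n\n"
        f"Source text:\n{source}\n\n"
        f"Candidate response:\n{response}\n\n"
        "Scores from specialized assistant evaluators:\n"
        f"{score_lines}\n\n"
        "Considering the response and the assistant scores, rate the overall "
        "quality from 1 to 5 and briefly justify your rating."
    )
    return call_llm(prompt)

def call_llm(prompt: str) -> str:
    # Placeholder: plug in any LLM completion API here.
    raise NotImplementedError

# Hypothetical usage with aspect scores such as SummEval's dimensions:
# fusion_eval(article, summary, {"coherence": 4.1, "consistency": 3.8})
```

The key design point this sketch illustrates is that the LLM acts as the fusion layer: rather than averaging the assistant scores directly, it reads them alongside the response and produces the final judgment.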
