

Poster in Workshop: Statistical Frontiers in LLMs and Foundation Models

HuLLMI: HUMAN VS. LLM IDENTIFICATION WITH EXPLAINABILITY

Prathamesh Dinesh Joshi · Sahil Pocker · Raj Dandekar · Rajat Dandekar · Sreedath Panat

Keywords: [ Classification ] [ Text detection ] [ NLP Detectors ] [ Human vs AI ] [ Explainable AI ] [ LLMs ]

Sat 14 Dec 3:45 p.m. PST — 4:30 p.m. PST

Abstract:

As LLMs become increasingly proficient at producing human-like responses, there has been a rise in academic and industrial efforts dedicated to flagging a given piece of text as "human" or "AI". Most of these efforts rely on modern NLP detectors such as T5-Sentinel and RoBERTa-Sentinel, with little attention to the interpretability and explainability of these models. In our study, we provide a comprehensive analysis showing that traditional ML models (Naive Bayes, MLP, Random Forest, XGBoost) perform as well as modern NLP detectors in human vs AI text detection. We achieve this by implementing a robust testing procedure on diverse datasets, including curated corpora and real-world samples. Subsequently, by employing the explainable AI technique LIME, we uncover the parts of the input that contribute most to a model's prediction, providing insight into the detection process. Our study contributes to the growing need for production-level LLM detection tools, which can leverage the wide range of traditional as well as modern NLP detectors we propose. Finally, the LIME techniques we demonstrate also have the potential to equip these detection tools with interpretability analysis features, making them more reliable and trustworthy in domains such as education, healthcare, and media.
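The abstract describes a concrete pattern: train traditional classifiers on text features, then apply LIME to attribute each prediction to parts of the input. The sketch below illustrates that pattern with a TF-IDF + Naive Bayes pipeline and LIME's text explainer; the toy corpus, labels, and model choice are illustrative placeholders, not the paper's actual datasets, features, or configuration.

```python
# Minimal sketch, assuming scikit-learn and the `lime` package are installed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

# Placeholder corpus: 0 = human-written, 1 = LLM-generated.
texts = [
    "Honestly, the bus was late again and I ended up walking the whole way.",
    "Sure! Here is a concise, well-structured summary of the requested topic.",
    "We grabbed coffee after class and argued about the movie for an hour.",
    "As an AI language model, I can outline the key considerations below.",
]
labels = [0, 1, 0, 1]

# TF-IDF features + Naive Bayes, one of the traditional baselines named in the abstract.
pipeline = make_pipeline(TfidfVectorizer(), MultinomialNB())
pipeline.fit(texts, labels)

# LIME perturbs the input text and fits a local surrogate model, surfacing the
# tokens that push the prediction toward "human" or "AI".
explainer = LimeTextExplainer(class_names=["human", "ai"])
sample = "Here is a detailed, step-by-step explanation of the process."
explanation = explainer.explain_instance(sample, pipeline.predict_proba, num_features=5)
print(explanation.as_list())  # [(token, weight), ...] sorted by contribution
```

The same explainer call works with any of the other classifiers mentioned (MLP, Random Forest, XGBoost), since LIME only needs a function mapping raw texts to class probabilities.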
