

Poster

Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning

Top Piriyakulkij · Cassidy Langenfeld · Tuan Anh Le · Kevin Ellis

East Exhibit Hall A-C #3904
Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We give a model of how to infer natural language rules by doing experiments. The model integrates Large Language Models (LLMs) with Monte Carlo algorithms for probabilistic inference, interleaving online belief updates with experiment design under information-theoretic criteria. We conduct a human-model comparison on a Zendo-style task, finding that a critical ingredient for modeling the human data is to assume that humans also consider fuzzy, probabilistic rules, in addition to assuming that humans perform approximately-Bayesian belief updates. We also compare with recent algorithms for using LLMs to generate and revise hypotheses, finding that our online inference method yields higher accuracy at recovering the true underlying rule, and provides better support for designing optimal experiments.
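The abstract describes two interleaved steps: an online (approximately Bayesian) belief update over candidate rules, and experiment design under an information-theoretic criterion. The sketch below is a minimal illustration of that loop, not the authors' implementation: it assumes candidate rules are already available as probabilistic classifiers (the paper proposes them with an LLM and maintains them with Monte Carlo inference), and the names `rules`, `weights`, and `candidate_scenes` are hypothetical.

```python
# Minimal sketch (assumptions noted above; not the paper's actual algorithm):
# maintain a weighted set of candidate rules, update weights after each
# observation, and pick the next experiment by expected information gain.

import math

def likelihood(rule, scene, label):
    """P(observed label | rule, scene); fuzzy rules return values in (0, 1)."""
    p_true = rule(scene)
    return p_true if label else 1.0 - p_true

def update_beliefs(weights, rules, scene, label):
    """One online Bayesian belief update after observing (scene, label)."""
    new = [w * likelihood(r, scene, label) for w, r in zip(weights, rules)]
    z = sum(new)
    return [w / z for w in new] if z > 0 else weights

def entropy(ps):
    return -sum(p * math.log(p) for p in ps if p > 0)

def expected_information_gain(weights, rules, scene):
    """EIG of testing `scene`: prior entropy minus expected posterior entropy."""
    prior_h = entropy(weights)
    p_true = sum(w * r(scene) for w, r in zip(weights, rules))
    eig = prior_h
    for label, p_label in [(True, p_true), (False, 1.0 - p_true)]:
        if p_label > 0:
            posterior = update_beliefs(weights, rules, scene, label)
            eig -= p_label * entropy(posterior)
    return eig

def choose_experiment(weights, rules, candidate_scenes):
    """Pick the scene whose outcome is expected to be most informative."""
    return max(candidate_scenes,
               key=lambda s: expected_information_gain(weights, rules, s))
```

Here the expected information gain reduces to the mutual information between the experiment's outcome and the identity of the true rule, which is the standard information-theoretic criterion for active experiment selection.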
