

Poster

Doing Experiments and Revising Rules with Natural Language and Probabilistic Reasoning

Top Piriyakulkij · Cassidy Langenfeld · Tuan Anh Le · Kevin Ellis

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

We give a model of how to infer hidden natural-language rules by doing experiments. The model integrates Large Language Models (LLMs) with Monte Carlo algorithms for probabilistic inference, interleaving online belief updates with experiment design under information-theoretic criteria. We conduct a human-model comparison on a Zendo-style task, finding that a critical ingredient for modeling the human data is to assume that humans consider fuzzy, probabilistic rules, in addition to assuming that humans perform approximately Bayesian belief updates. We also compare with recent algorithms that use LLMs to generate and revise hypotheses, finding that our online inference method yields higher accuracy at recovering the true underlying rule and provides better support for designing optimal experiments. Collectively, these results help us understand the strengths and weaknesses of LLMs as `intuitive experimenters', and show where they deviate from, and agree with, human behavior in simple experimental paradigms.
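The abstract describes a core loop: maintain beliefs over candidate natural-language rules, update them online as experiment outcomes arrive, and choose the next experiment by an information-theoretic criterion. Below is a minimal illustrative sketch of that loop in Python. The `Hypothesis` structure, the `make_fuzzy_likelihood` noise model (standing in for the "fuzzy, probabilistic rules" ingredient), and greedy expected-information-gain selection are assumptions for illustration, not the authors' implementation.

```python
import math
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Hypothesis:
    rule: str                                     # natural-language rule text
    likelihood: Callable[[object, bool], float]   # P(outcome | experiment, rule)
    weight: float                                 # normalized posterior weight

def make_fuzzy_likelihood(rule_predicate: Callable[[object], bool],
                          noise: float = 0.1) -> Callable[[object, bool], float]:
    """Fuzzy/probabilistic rule (illustrative assumption): the observed
    outcome matches the rule's prediction only with probability 1 - noise."""
    def likelihood(experiment, outcome: bool) -> float:
        return 1.0 - noise if rule_predicate(experiment) == outcome else noise
    return likelihood

def update_beliefs(hyps: List[Hypothesis], experiment, outcome: bool) -> None:
    """Online Bayesian update: reweight each hypothesis by its likelihood
    for the observed outcome, then renormalize."""
    for h in hyps:
        h.weight *= h.likelihood(experiment, outcome)
    total = sum(h.weight for h in hyps) or 1.0
    for h in hyps:
        h.weight /= total

def entropy(ps: List[float]) -> float:
    return -sum(p * math.log(p) for p in ps if p > 0)

def expected_information_gain(hyps: List[Hypothesis], experiment) -> float:
    """EIG = H(current beliefs) - E_outcome[H(beliefs after outcome)],
    for a binary-outcome experiment."""
    eig = entropy([h.weight for h in hyps])
    for outcome in (True, False):
        joint = [h.weight * h.likelihood(experiment, outcome) for h in hyps]
        p_outcome = sum(joint)
        if p_outcome > 0:
            eig -= p_outcome * entropy([p / p_outcome for p in joint])
    return eig

def choose_experiment(hyps: List[Hypothesis], candidates: list):
    """Greedy information-theoretic design: run the candidate with highest EIG."""
    return max(candidates, key=lambda e: expected_information_gain(hyps, e))
```

In the paper's setting, the candidate rules would come from an LLM proposal distribution and the likelihoods from LLM-based rule interpretation; here both are stubbed with plain Python predicates to keep the sketch self-contained.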
