

Poster in Workshop: Bayesian Decision-making and Uncertainty: from probabilistic and spatiotemporal modeling to sequential experiment design

Had enough of experts? Elicitation and evaluation of Bayesian priors from large language models

David Antony Selby · Kai Spriestersbach · Yuichiro Iwashita · Dennis Bappert · Archana Warrier · Sumantrak Mukherjee · Muhammad Asim · Koichi Kise · Sebastian Vollmer

Keywords: [ Bayesian modelling ] [ prior elicitation ] [ expert systems ] [ large language models ] [ machine learning ] [ prompt engineering ]


Abstract:

Large language models (LLMs) have been extensively studied for their ability to generate convincing natural language sequences; however, their utility for quantitative information retrieval is less well understood. Here we explore the feasibility of LLMs as a mechanism for quantitative knowledge retrieval to aid the elicitation of expert-informed prior distributions for Bayesian statistical models. We present a prompt engineering framework, treating an LLM as an interface to scholarly literature, and compare its responses across different contexts and domains against more established elicitation approaches. We discuss the implications and challenges of treating LLMs as 'experts'.
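The abstract does not specify the authors' implementation. As a rough illustration of the general idea, the following minimal Python sketch asks a model for Normal prior hyperparameters as JSON and builds a scipy.stats distribution from the reply; query_llm is a hypothetical stand-in for any chat-completion call, and the prompt and example quantity are our own assumptions, not taken from the paper.

    # Minimal sketch of LLM-based prior elicitation (not the authors'
    # exact framework): request Normal prior hyperparameters as JSON,
    # then construct a frozen scipy.stats prior from them.
    import json
    from scipy import stats

    PROMPT = (
        "You are a domain expert. Give a Normal prior for the average adult "
        "human body temperature in degrees Celsius. Respond only with JSON: "
        '{"mean": <float>, "sd": <float>}.'
    )

    def elicit_normal_prior(query_llm):
        """Parse the model's JSON reply into a frozen Normal prior."""
        reply = query_llm(PROMPT)
        params = json.loads(reply)
        return stats.norm(loc=params["mean"], scale=params["sd"])

    # Stubbed response for illustration; replace the lambda with a real
    # LLM API call to elicit a prior from an actual model.
    prior = elicit_normal_prior(lambda _: '{"mean": 36.8, "sd": 0.4}')
    print(prior.interval(0.95))  # 95% prior credible interval

Constraining the reply to machine-readable JSON is one simple way to make elicited hyperparameters parseable and comparable across prompts, contexts, and domains.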
