Poster in Workshop: Causality and Large Models
On LLM Augmented AB Experimentation
Shiv Shankar · Ritwik Sinha · Madalina Fiterau
Keywords: [ LLM ] [ article headlines ] [ clickthrough ] [ A/B Testing ]
Automated experimentation methods for evaluating user preferences and engagement are a cornerstone of the current digital landscape. Most such systems rely on marketers and creators to design content before deployment. However, with the advent of Large Language Models (LLMs), the feedback cycle is considerably shortened while the experimentation space expands significantly, necessitating novel and efficient ways to assess user engagement. In this paper, we experiment with using LLMs as simulators, or treatment raters, in an A/B testing application without running an A/B test.
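As a rough illustration of the idea (a minimal sketch, not the paper's actual pipeline), the code below uses an LLM as a treatment rater for two candidate article headlines and compares the average simulated clickthrough appeal. The query_llm helper, the prompt wording, and the 0-100 rating scale are assumptions introduced here for illustration only.

from typing import Callable

def rate_headline(query_llm: Callable[[str], str], headline: str) -> float:
    # Ask the LLM to score a headline's clickthrough appeal on a 0-100 scale.
    # query_llm is a hypothetical helper that sends a prompt to any chat LLM
    # and returns its text reply; it stands in for whichever model/API is used.
    prompt = (
        "You are simulating an average news reader. On a scale from 0 to 100, "
        f"how likely would you be to click this article headline?\n\n"
        f"Headline: {headline}\n\nReply with a single number."
    )
    reply = query_llm(prompt)
    try:
        return float(reply.strip())
    except ValueError:
        return 0.0  # fall back if the model does not return a parseable number

def simulated_ab_test(query_llm: Callable[[str], str],
                      headline_a: str, headline_b: str, n_raters: int = 20):
    # Average repeated LLM ratings per variant, mimicking a pool of simulated users,
    # and return the preferred treatment without running a live A/B test.
    score_a = sum(rate_headline(query_llm, headline_a) for _ in range(n_raters)) / n_raters
    score_b = sum(rate_headline(query_llm, headline_b) for _ in range(n_raters)) / n_raters
    return ("A", score_a) if score_a >= score_b else ("B", score_b)

In this sketch the winning variant is simply the one with the higher mean rating; a real evaluation would also need calibration against observed clickthrough data and an uncertainty estimate over the simulated ratings.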