

Poster in Workshop: Language Gamification

Evaluating the role of ‘Constitutions’ for learning from AI feedback

Saskia Redgate · Andrew M. Bean · Adam Mahdi


Abstract:

The growing capabilities of large language models (LLMs) have led to their use as substitutes for human feedback when training and assessing other LLMs. These methods often rely on 'constitutions', written guidelines that a critic model uses to provide feedback and improve generations. We investigate how the choice of constitution affects feedback quality by using four different constitutions to improve patient-centered communication in medical interviews. In pairwise comparisons conducted by 215 human raters, we found that detailed constitutions led to better results on emotive qualities. However, none of the constitutions outperformed the baseline in learning more practically-oriented skills related to information gathering and provision. Our findings indicate that while detailed constitutions should be prioritised, there are possible limitations to the effectiveness of AI feedback as a reward signal in certain areas.
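To make the setup concrete, below is a minimal sketch of what a constitution-guided critique-and-revise step could look like. The constitution text, prompts, and the call_llm helper are illustrative assumptions for exposition only, not the authors' implementation or any specific library's API.

```python
# Hypothetical sketch: a critic model scores a draft reply against a
# written constitution, then the draft is rewritten using that critique.
# call_llm is a placeholder for whatever chat-completion endpoint is used.

CONSTITUTION = (
    "1. Acknowledge the patient's emotions before asking further questions.\n"
    "2. Ask one question at a time, in plain language.\n"
    "3. Summarise what the patient has said before giving advice.\n"
)

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM call; substitute any text-generation API."""
    raise NotImplementedError

def critique_and_revise(dialogue: str, draft_reply: str) -> str:
    """One round of constitution-based AI feedback on a draft reply."""
    critique = call_llm(
        f"Constitution:\n{CONSTITUTION}\n"
        f"Dialogue so far:\n{dialogue}\n"
        f"Draft reply:\n{draft_reply}\n"
        "List where the draft falls short of the constitution."
    )
    revised = call_llm(
        f"Dialogue so far:\n{dialogue}\n"
        f"Draft reply:\n{draft_reply}\n"
        f"Critique:\n{critique}\n"
        "Rewrite the draft reply so that it addresses the critique."
    )
    return revised
```

In this framing, the choice of CONSTITUTION is the variable the paper studies: more detailed guidelines give the critic more to work with, which is where the emotive-quality gains were observed.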
