

Spotlight in Workshop: Socially Responsible Language Modelling Research (SoLaR)

On Demonstration Selection for Improving Fairness in Language Models

Song Wang · Peng Wang · Yushun Dong · Tong Zhou · Lu Cheng · Yangfeng Ji · Jundong Li

Keywords: [ In-Context Learning ] [ Large Language Models ] [ Bias ] [ Fairness ]


Abstract:

Recently, there has been a surge in deploying Large Language Models (LLMs) for decision-making tasks, such as income prediction and crime risk assessment. Due to bias encoded in their pre-training data, LLMs often exhibit unfairness and discrimination against underprivileged groups. Traditional fairness enhancement methods are generally impractical for LLMs, however, because of the computational cost of fine-tuning and the black-box nature of powerful LLMs. To address this, In-Context Learning (ICL) offers a promising strategy for enhancing LLM fairness through demonstrations, without extensive retraining. Yet the efficacy of ICL is hindered by the inherent bias in both the data and the LLM itself, which can exacerbate existing societal disparities. In this study, we investigate the unfairness issue in LLMs and propose a novel demonstration selection strategy to address data and model biases in LLMs. Extensive experiments on various tasks and datasets validate the superiority of our strategy.
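Although the abstract does not describe the proposed selection strategy in detail, the general idea of ICL-based fairness enhancement through demonstrations can be illustrated with a small sketch. The example below is an assumption-laden illustration, not the authors' method: it selects demonstrations balanced across demographic groups and outcome labels before assembling a few-shot prompt. The function names, toy data, and the idea of passing the prompt to an arbitrary LLM completion endpoint are all hypothetical.

```python
# Minimal sketch of fairness-aware demonstration selection for ICL.
# NOT the authors' method: it simply balances demonstrations across
# (demographic group, label) cells before building a few-shot prompt.

import random
from collections import defaultdict


def select_balanced_demonstrations(pool, group_key, label_key, k_per_cell=1, seed=0):
    """Pick demonstrations so each (group, label) cell is equally represented."""
    rng = random.Random(seed)
    cells = defaultdict(list)
    for ex in pool:
        cells[(ex[group_key], ex[label_key])].append(ex)
    selected = []
    for _, examples in sorted(cells.items()):
        selected.extend(rng.sample(examples, min(k_per_cell, len(examples))))
    rng.shuffle(selected)  # avoid ordering effects from grouped demonstrations
    return selected


def build_prompt(demos, query_text):
    """Assemble a few-shot prompt from demonstrations plus the query instance."""
    lines = [f"Profile: {ex['text']}\nAnswer: {ex['label']}\n" for ex in demos]
    lines.append(f"Profile: {query_text}\nAnswer:")
    return "\n".join(lines)


# Toy income-prediction pool; groups and labels are purely illustrative.
pool = [
    {"text": "age 40, group A, works 45h/week", "group": "A", "label": ">50K"},
    {"text": "age 38, group A, works 20h/week", "group": "A", "label": "<=50K"},
    {"text": "age 41, group B, works 45h/week", "group": "B", "label": ">50K"},
    {"text": "age 37, group B, works 20h/week", "group": "B", "label": "<=50K"},
]

demos = select_balanced_demonstrations(pool, group_key="group", label_key="label")
prompt = build_prompt(demos, "age 39, group B, works 44h/week")
print(prompt)  # Pass `prompt` to an LLM completion endpoint of your choice.
```

The design choice illustrated here is that demonstration composition, rather than model weights, is the lever for mitigating bias: the model stays frozen, and fairness is pursued purely through what the prompt exposes it to.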
