Poster
CultureLLM: Incorporating Cultural Differences into Large Language Models
Cheng Li · Mengzhuo Chen · Jindong Wang · Sunayana Sitaram · Xing Xie
Fri 13 Dec 11 a.m. — 2 p.m. PST
Abstract:
Large language models (LLMs) are reported to be partial to certain cultures owing to the dominance of English corpora in their training data. Since multilingual cultural data are often expensive to collect, existing efforts address this via prompt engineering or culture-specific pre-training. However, these approaches may overlook the knowledge deficiency of low-resource cultures and can require extensive computing resources. In this paper, we propose CultureLLM, a cost-effective solution for incorporating cultural differences into LLMs. CultureLLM adopts the World Values Survey (WVS) as seed data and generates semantically equivalent training data via the proposed semantic data augmentation. Using only $50$ seed samples from the WVS with augmented data, we fine-tune culture-specific LLMs and a unified model (CultureLLM-One) for $9$ cultures covering both high- and low-resource languages. Extensive experiments on $60$ culture-related datasets demonstrate that CultureLLM significantly outperforms various counterparts such as GPT-3.5 (by $8.1$\%) and Gemini Pro (by $9.5$\%), with performance comparable to, or even better than, GPT-4. Our human study shows that the generated samples are semantically equivalent to the original samples, providing an effective solution for LLM augmentation.
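The augmentation pipeline described in the abstract (expand a small set of WVS seed samples into semantically equivalent training data, then fine-tune) can be sketched as below. This is a hypothetical illustration, not the authors' implementation: `paraphrase` stands in for an LLM call that rewrites a survey question while preserving its meaning, and the sample format is assumed.

```python
# Hypothetical sketch of semantic data augmentation from a few seed samples.
# paraphrase() is a placeholder for an LLM call that produces k semantically
# equivalent rewrites of a survey question (assumption, not the paper's code).

def paraphrase(text: str, k: int) -> list[str]:
    # Placeholder: a real system would query an LLM for k meaning-preserving
    # rewrites; here we simply tag variants for illustration.
    return [f"{text} (variant {i + 1})" for i in range(k)]

def augment(seed_samples: list[dict], k: int = 5) -> list[dict]:
    """Expand each (question, answer) seed into k + 1 training samples
    whose questions are semantically equivalent; answers are reused."""
    augmented = []
    for sample in seed_samples:
        augmented.append(sample)  # keep the original seed sample
        for variant in paraphrase(sample["question"], k):
            augmented.append({"question": variant, "answer": sample["answer"]})
    return augmented

seeds = [{"question": "How important is family in your life?",
          "answer": "Very important"}]
training_data = augment(seeds, k=5)
print(len(training_data))  # each seed yields k + 1 samples
```

The augmented set would then be used to fine-tune either a culture-specific model or the unified CultureLLM-One.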