

Poster

Zipfian Whitening

Sho Yokoi · Han Bao · Hiroto Kurita · Hidetoshi Shimodaira

East Exhibit Hall A-C #1908
Paper · Poster · OpenReview
Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The word embedding space in neural models is skewed, and correcting this can improve task performance. We point out that most approaches for modeling, correcting, and measuring the symmetry of an embedding space implicitly assume that the word frequencies are uniform; in reality, word frequencies follow a highly non-uniform distribution, known as Zipf's law. Surprisingly, simply performing PCA whitening weighted by the empirical word frequency that follows Zipf's law significantly improves task performance, surpassing established baselines. From a theoretical perspective, both our approach and existing methods can be clearly categorized: word representations are distributed according to an exponential family with either uniform or Zipfian base measures. By adopting the latter approach, we can naturally emphasize informative low-frequency words in terms of their vector norm, which becomes evident from the information-geometric perspective (Oyama et al., EMNLP 2023), and in terms of the loss functions for imbalanced classification (Menon et al., ICLR 2021). Additionally, our theory corroborates that popular natural language processing methods, such as skip-gram negative sampling (Mikolov et al., NIPS 2013), WhiteningBERT (Huang et al., Findings of EMNLP 2021), and headless language models (Godey et al., ICLR 2024), work well just because their word embeddings encode the empirical word frequency into the underlying probabilistic model.
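The following is a minimal sketch of the frequency-weighted whitening the abstract describes, not the authors' released implementation: it assumes a NumPy embedding matrix `W` (vocabulary × dimension) and unigram counts `counts`; the function name and the small eigenvalue floor are illustrative choices.

```python
import numpy as np

def zipfian_whitening(W: np.ndarray, counts: np.ndarray) -> np.ndarray:
    """PCA-whiten word embeddings with expectations taken under the
    empirical (Zipfian) word-frequency distribution rather than the
    uniform distribution over vocabulary entries.

    W      : (V, d) embedding matrix, one row per word.
    counts : (V,)   corpus frequencies of the corresponding words.
    """
    p = counts / counts.sum()            # empirical unigram probabilities

    # Center under the frequency-weighted expectation E_p[w].
    mu = p @ W                           # (d,)
    X = W - mu

    # Frequency-weighted covariance E_p[(w - mu)(w - mu)^T].
    cov = (X * p[:, None]).T @ X         # (d, d)

    # Rotate to the eigenbasis and rescale each axis by 1/sqrt(eigenvalue).
    eigvals, eigvecs = np.linalg.eigh(cov)
    transform = eigvecs / np.sqrt(eigvals + 1e-12)
    return X @ transform
```

With uniform `counts` this reduces to ordinary (uniform) PCA whitening; the Zipfian weights are what change the centering and covariance estimates and, per the abstract, emphasize informative low-frequency words through their resulting vector norms.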
