

Poster in the Workshop on Machine Learning and Compression

Efficient Vocabulary Compression for Low-Resource Language Models

Sreeram Vennam · Anish Joishy · Ponnurangam Kumaraguru


Abstract:

We present a method to compress the final linear layer of language models, reducing memory usage by up to 3.4x without significant performance loss. By grouping tokens based on Byte Pair Encoding (BPE) merges, we avoid materializing the memory-intensive logits tensor. Evaluations on the TinyStories dataset show that our method matches GPT-Neo's performance while improving throughput by up to 3x, making it well suited to low-resource environments.
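The abstract does not spell out the exact factorization, so the sketch below is only an illustrative guess at the general idea rather than the authors' implementation: assign each vocabulary token to a group (here via a caller-supplied group_of_token map, which in the paper would be derived from BPE merges), predict the group with a small linear layer, and score tokens only within the target's group, so a dense logits tensor over the full vocabulary is never built. The class name GroupedLMHead, the padded per-group projection, and the additive two-level loss are all hypothetical choices for this sketch.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GroupedLMHead(nn.Module):
    """Hierarchical output head: score token groups first, then score tokens
    only inside the relevant group, so the dense (N, |V|) logits tensor is
    never materialized. Illustrative sketch, not the paper's implementation."""

    def __init__(self, hidden_dim, group_of_token):
        super().__init__()
        group_of_token = torch.as_tensor(group_of_token)            # (|V|,)
        num_groups = int(group_of_token.max()) + 1
        sizes = torch.bincount(group_of_token, minlength=num_groups)
        max_size = int(sizes.max())

        # Rank of each token inside its own group.
        index_in_group = torch.zeros_like(group_of_token)
        for g in range(num_groups):
            mask = group_of_token == g
            index_in_group[mask] = torch.arange(int(mask.sum()))

        self.register_buffer("group_of_token", group_of_token)
        self.register_buffer("index_in_group", index_in_group)
        self.register_buffer("group_sizes", sizes)

        self.group_head = nn.Linear(hidden_dim, num_groups)          # cheap: |G| << |V|
        # One small projection per group, padded to the largest group size.
        self.intra_weight = nn.Parameter(
            torch.randn(num_groups, max_size, hidden_dim) * hidden_dim ** -0.5
        )

    def loss(self, hidden, targets):
        """hidden: (N, hidden_dim) final hidden states, targets: (N,) token ids."""
        g = self.group_of_token[targets]                             # (N,)

        # Level 1: which group does the next token belong to?
        group_logits = self.group_head(hidden)                       # (N, |G|)
        loss_group = F.cross_entropy(group_logits, g)

        # Level 2: which token inside that group? Only the target's group
        # is scored, so no (N, |V|) tensor is ever built.
        w = self.intra_weight[g]                                     # (N, max_size, H)
        intra_logits = torch.bmm(w, hidden.unsqueeze(-1)).squeeze(-1)
        slot = torch.arange(intra_logits.size(1), device=hidden.device)
        pad = slot[None, :] >= self.group_sizes[g][:, None]          # mask padded slots
        intra_logits = intra_logits.masked_fill(pad, float("-inf"))
        loss_intra = F.cross_entropy(intra_logits, self.index_in_group[targets])

        return loss_group + loss_intra


# Toy usage: a 10-token vocabulary split into 3 groups.
head = GroupedLMHead(hidden_dim=8, group_of_token=[0, 0, 0, 1, 1, 1, 1, 2, 2, 2])
hidden = torch.randn(5, 8)
targets = torch.randint(0, 10, (5,))
print(head.loss(hidden, targets))
```

The memory saving in this sketch comes from replacing the (N, |V|) logits matrix with one (N, |G|) matrix plus one (N, max_group_size) matrix; how the actual method balances group count against group size is not specified in the abstract.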
