

Oral in Workshop: The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models

RetrievalAttention: Accelerating Long-Context LLM Inference via Vector Retrieval

Di Liu · Meng Chen · Baotong Lu · Huiqiang Jiang · Zhenhua Han · Qianxi Zhang · Qi Chen · Chengruidong Zhang · Bailu Ding · Kai Zhang · Chen Chen · Fan Yang · Yuqing Yang · Lili Qiu

Keywords: [ Efficient Inference ]

Sat 14 Dec 1:30 p.m. PST — 1:36 p.m. PST

Abstract:

Transformer-based Large Language Models (LLMs) have become increasingly important. However, due to the quadratic time complexity of attention computation, scaling LLMs to longer contexts incurs extremely slow inference latency and high GPU memory consumption for caching key-value (KV) vectors. This paper proposes RetrievalAttention, a training-free approach that both accelerates attention computation and reduces GPU memory consumption. Leveraging the dynamic sparsity of the attention mechanism, RetrievalAttention builds approximate nearest neighbor search (ANNS) indexes over the KV vectors in CPU memory and retrieves the most relevant ones via vector search during generation. Unfortunately, we observe that off-the-shelf ANNS indexes are often ineffective for such retrieval tasks because of the out-of-distribution (OOD) gap between query vectors and key vectors in the attention mechanism. RetrievalAttention addresses this OOD challenge with an attention-aware vector search algorithm that adapts to the distribution of query vectors. Our evaluation shows that RetrievalAttention needs to access only 1--3% of the data while maintaining high model accuracy, significantly reducing the inference cost of long-context LLMs with a much lower GPU memory footprint. In particular, RetrievalAttention needs only a single NVIDIA RTX 4090 (24 GB) to serve an 8B-parameter LLM with a 128K-token context, generating one token in 0.188 seconds.
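The core idea in the abstract — exploiting attention's dynamic sparsity by retrieving only the few keys most relevant to the current query — can be illustrated with a minimal sketch. This is not the paper's implementation: it uses exact top-k inner-product search over NumPy arrays as a stand-in for the attention-aware ANNS index the paper builds over CPU-resident KV vectors, and the function name and shapes are illustrative assumptions.

```python
import numpy as np

def retrieval_sparse_attention(q, K, V, k=32):
    """Approximate single-query attention by attending only to the top-k
    keys with the largest attention logits. Here the top-k set is found by
    exact search (argpartition); in RetrievalAttention this step would be
    an approximate nearest-neighbor lookup over KV vectors in CPU memory.
    q: (d,) query; K, V: (n, d) cached key/value vectors."""
    d = q.shape[0]
    scores = K @ q / np.sqrt(d)            # (n,) attention logits
    idx = np.argpartition(-scores, k)[:k]  # indices of the k largest logits
    w = np.exp(scores[idx] - scores[idx].max())
    w /= w.sum()                           # softmax over retrieved keys only
    return w @ V[idx]                      # (d,) attention output

def full_attention(q, K, V):
    """Reference dense attention over all n cached keys."""
    scores = K @ q / np.sqrt(q.shape[0])
    w = np.exp(scores - scores.max())
    return (w / w.sum()) @ V

# Tiny demo: when attention mass concentrates on a few keys (the dynamic
# sparsity the paper exploits), retrieving ~3% of the cache suffices.
rng = np.random.default_rng(0)
n, d = 1024, 64
K = rng.standard_normal((n, d))
V = rng.standard_normal((n, d))
q = 5.0 * K[7]                             # query strongly aligned with key 7
approx = retrieval_sparse_attention(q, K, V, k=32)
dense = full_attention(q, K, V)
print(np.max(np.abs(dense - approx)))
```

Because the softmax concentrates nearly all weight on the few highest-scoring keys, the k=32 (≈3% of n=1024) retrieval reproduces the dense output to high precision, which mirrors the paper's observation that accessing 1--3% of the data preserves accuracy.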
