

Poster

Block Transformer: Global-to-Local Language Modeling for Fast Inference

Namgyu Ho · Sangmin Bae · Taehyeon Kim · Hyunjik Jo · Yireun Kim · Tal Schuster · Adam Fisch · James Thorne · Se-Young Yun

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

This paper presents the Block Transformer architecture, which applies hierarchical global-to-local modeling to autoregressive transformers to mitigate the inference bottlenecks of self-attention. To apply self-attention, the key-value (KV) cache of prompt tokens must be prefilled and then retrieved from memory at every decoding step. As a result, KV cache IO becomes a significant bottleneck in batch inference. We observe that these costs stem from applying self-attention over the global context, so we isolate the expensive global modeling to the lower layers and apply fast local modeling in the upper layers. To mitigate the costs in the lower layers, we aggregate input tokens into fixed-size blocks and apply self-attention at this coarse level. Context information is aggregated into a single embedding, which enables the upper layers to decode the next block of tokens without global attention. Free of global attention bottlenecks, the upper layers can fully utilize the compute hardware to maximize inference throughput. By leveraging the global and local modules, the Block Transformer architecture achieves 10-20x gains in inference throughput compared to vanilla transformers with equivalent perplexity. Our work introduces a new approach to optimizing language model inference through a novel application of global-to-local modeling.
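To make the global-to-local split concrete, the sketch below illustrates the idea described in the abstract: tokens are pooled into fixed-size block embeddings, a lower (global) module applies coarse self-attention over blocks, and an upper (local) module decodes the tokens of the next block conditioned only on a single context embedding. This is a minimal sketch based solely on the abstract; the mean-pooling embedder, the use of generic PyTorch encoder layers, and all module names and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Minimal global-to-local sketch (assumptions throughout; not the paper's code).
import torch
import torch.nn as nn


def causal_mask(sz: int) -> torch.Tensor:
    # Upper-triangular -inf mask prevents attention to future positions.
    return torch.triu(torch.full((sz, sz), float("-inf")), diagonal=1)


class BlockTransformerSketch(nn.Module):
    def __init__(self, vocab_size=32000, d_model=512, block_len=4,
                 n_global_layers=4, n_local_layers=4, n_heads=8):
        super().__init__()
        self.block_len = block_len
        self.tok_emb = nn.Embedding(vocab_size, d_model)

        # Global (lower) module: self-attention over coarse block embeddings,
        # so its KV cache scales with the number of blocks, not tokens.
        g_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.global_decoder = nn.TransformerEncoder(g_layer, n_global_layers)

        # Local (upper) module: decodes tokens within one block, conditioned
        # only on a single context embedding, with no global attention.
        l_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.local_decoder = nn.TransformerEncoder(l_layer, n_local_layers)
        self.lm_head = nn.Linear(d_model, vocab_size)

    def forward(self, tokens: torch.Tensor) -> torch.Tensor:
        # tokens: (batch, num_blocks * block_len)
        B, T = tokens.shape
        L = self.block_len
        x = self.tok_emb(tokens)                            # (B, T, D)

        # 1) Aggregate each fixed-size block into one embedding
        #    (mean pooling is an illustrative choice).
        blocks = x.view(B, T // L, L, -1).mean(dim=2)       # (B, nb, D)
        nb = blocks.size(1)

        # 2) Coarse causal self-attention over block embeddings.
        context = self.global_decoder(blocks, mask=causal_mask(nb))

        # 3) Shift so the context from blocks <= i is used to decode block i+1
        #    (no future information leaks into the local decoder).
        ctx = torch.cat([torch.zeros_like(context[:, :1]), context[:, :-1]], 1)

        # 4) Local decoding: each block attends only within itself, prepended
        #    with its single global context embedding.
        x_blocks = x.view(B * nb, L, -1)
        local_in = torch.cat([ctx.reshape(B * nb, 1, -1), x_blocks], dim=1)
        h = self.local_decoder(local_in, mask=causal_mask(L + 1))
        logits = self.lm_head(h[:, 1:])                     # drop context slot
        return logits.view(B, T, -1)


if __name__ == "__main__":
    model = BlockTransformerSketch()
    toks = torch.randint(0, 32000, (2, 16))  # 2 sequences, 16 tokens (4 blocks)
    print(model(toks).shape)                 # torch.Size([2, 16, 32000])
```

In this sketch, only the global module's attention spans the full (block-level) context, while the local module's attention and KV cache are bounded by the block length plus one context slot, which is the source of the throughput gains the abstract describes.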
