

Oral in Workshop: The Fourth Workshop on Efficient Natural Language and Speech Processing (ENLSP-IV): Highlighting New Architectures for Future Foundation Models

Post-Training Statistical Calibration for Higher Activation Sparsity

Vui Seng Chua · Yujie Pan · Nilesh Jain

Keywords: [ Efficient Inference ]

Sat 14 Dec 1:36 p.m. PST — 1:42 p.m. PST

Abstract:

We present Statistical Calibrated Activation Pruning (SCAP), a post-training activation pruning framework that (1) generalizes sparsification to the input activations of Fully-Connected layers for generic and flexible application across Transformers, and (2) features a simple Mode-Centering technique that pre-calibrates activation distributions to maximize post-training sparsity. Our results demonstrate robust Pareto efficiency compared to prior methods, translating to a 1.5× additional LLM decoding speedup over CATS [12] at iso model quality. SCAP's effectiveness is empirically verified across a wide range of models, including recent Transformer Decoders, MoE, Mamba2, Encoder Transformers, and pre-quantized models, highlighting its practicality and scalability. The code is available at https://github.com/IntelLabs/SCAP.
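To make the abstract's two ideas concrete, below is a minimal NumPy sketch of the general recipe it describes: estimate the statistical mode of an FC layer's input activations on calibration data, shift the distribution so that mode sits at zero, zero out small-magnitude entries with a calibrated threshold, and fold the constant shift into the layer's bias so the output is approximately preserved. This is an illustrative assumption of how such a scheme could be wired up, not the authors' implementation (see the linked repository for that); all function names, the per-tensor calibration granularity, and the synthetic Gumbel-shaped activations are hypothetical choices made here for brevity.

```python
# Hedged sketch of post-training activation pruning with mode-centering.
# Not the SCAP codebase; names and calibration granularity are assumptions.
import numpy as np


def estimate_mode(x, bins=512):
    """Histogram-peak estimate of the mode of flattened calibration activations."""
    counts, edges = np.histogram(x, bins=bins)
    i = int(np.argmax(counts))
    return 0.5 * (edges[i] + edges[i + 1])


def calibrate_threshold(x, mode, target_sparsity=0.6):
    """Magnitude threshold (around the mode) that prunes ~target_sparsity of entries."""
    return np.quantile(np.abs(x - mode), target_sparsity)


def sparse_fc_forward(x, W, b, mode, thr):
    """FC forward whose input is mode-centered and thresholded to induce zeros."""
    x_c = x - mode                                # center the distribution at its mode
    x_c = np.where(np.abs(x_c) < thr, 0.0, x_c)   # prune small magnitudes -> exact zeros
    b_eq = b + mode * W.sum(axis=0)               # fold the constant shift into the bias
    return x_c @ W + b_eq, float(np.mean(x_c == 0.0))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d_in, d_out, n_calib = 256, 512, 4096
    W = rng.normal(0.0, 0.02, size=(d_in, d_out)).astype(np.float32)
    b = np.zeros(d_out, dtype=np.float32)

    # Synthetic, skewed activations whose mode sits away from zero
    # (loosely mimicking a post-SiLU/GeLU FC input).
    calib = rng.gumbel(loc=0.5, scale=0.2, size=(n_calib, d_in)).astype(np.float32)
    mode = estimate_mode(calib.ravel())
    thr = calibrate_threshold(calib.ravel(), mode, target_sparsity=0.6)

    x = rng.gumbel(loc=0.5, scale=0.2, size=(8, d_in)).astype(np.float32)
    y_dense = x @ W + b
    y_sparse, sparsity = sparse_fc_forward(x, W, b, mode, thr)
    err = np.abs(y_sparse - y_dense).mean()
    print(f"mode={mode:.3f}  thr={thr:.3f}  activation sparsity={sparsity:.2f}  mean|dy|={err:.4f}")
```

Without the thresholding step the centered forward pass is exactly equivalent to the dense one, since (x - mode)W + mode·ΣW + b = xW + b; the calibrated threshold then trades a small output perturbation for a high fraction of zero input activations, which is what a sparse GEMM kernel can exploit at decode time.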
