Poster in Workshop: Fine-Tuning in Modern Machine Learning: Principles and Scalability
Token Pruning using a Lightweight Background Aware Vision Transformer
Sudhakar Sah · Ravish Kumar · Honnesh Rohmetra · Ehsan Saboori
High runtime memory and high latency put significant constraints on Vision Transformer training and inference, especially on edge devices. Token pruning reduces the number of input tokens to the ViT based on an importance criterion for each token. We present Background Aware Vision Transformer (BAViT), a pre-processing block for object detection models such as DETR and YOLOS that reduces runtime memory and increases throughput through a novel approach to identifying background tokens in the image. The background tokens can be pruned completely or partially before being fed to a ViT-based object detector. We use the semantic information provided by segmentation maps and/or bounding box annotations to train a few layers of a ViT to classify tokens as either foreground or background. Using 2 layers and 10 layers of BAViT, background and foreground tokens can be separated with 75% and 88% accuracy on the VOC dataset and 71% and 80% accuracy on the COCO dataset, respectively. We show that using BAViT-small as a preprocessor to YOLOS increases throughput by 30%-40% with an mAP drop of around 3% without any sparse fine-tuning, and of less than 2% with sparse fine-tuning. Our approach is specifically targeted at Edge AI use cases. Code and data are available at [Link].
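To make the pipeline concrete, below is a minimal PyTorch sketch of the token-classification and pruning step. This is an illustration under stated assumptions, not the authors' released code: BackgroundTokenClassifier, prune_background, keep_ratio, and the 0.5 threshold are hypothetical names and values, and the downstream detector is assumed to accept a variable-length token sequence, as YOLOS-style ViT detectors can.

    # Minimal sketch: a few ViT encoder layers score each patch token as
    # foreground/background; background tokens are then dropped before
    # the sequence reaches a ViT-based detector. Names are illustrative.
    import torch
    import torch.nn as nn

    class BackgroundTokenClassifier(nn.Module):
        """A small stack of transformer layers with a 2-way token head."""
        def __init__(self, dim=192, depth=2, heads=3):
            super().__init__()
            layer = nn.TransformerEncoderLayer(
                d_model=dim, nhead=heads, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
            self.head = nn.Linear(dim, 2)  # 0 = background, 1 = foreground

        def forward(self, tokens):                   # tokens: (B, N, dim)
            return self.head(self.encoder(tokens))   # logits: (B, N, 2)

    def prune_background(tokens, logits, keep_ratio=None):
        """Drop tokens classified as background; optionally keep a fixed ratio."""
        fg_score = logits.softmax(dim=-1)[..., 1]     # (B, N) foreground prob
        if keep_ratio is not None:
            # partial pruning: keep the top-k tokens by foreground score
            k = max(1, int(keep_ratio * tokens.size(1)))
            idx = fg_score.topk(k, dim=1).indices
        else:
            # full pruning: keep tokens with foreground prob > 0.5
            # (shown for batch size 1; batching needs padding/masking)
            idx = (fg_score[0] > 0.5).nonzero(as_tuple=True)[0].unsqueeze(0)
        idx = idx.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return tokens.gather(1, idx)                  # (B, K, dim)

    # Usage: score 196 patch tokens, keep the top 60%, then run the detector.
    B, N, D = 1, 196, 192
    tokens = torch.randn(B, N, D)                     # patch embeddings
    clf = BackgroundTokenClassifier(dim=D, depth=2)
    kept = prune_background(tokens, clf(tokens), keep_ratio=0.6)
    print(kept.shape)                                 # torch.Size([1, 117, 192])

In this sketch, partial pruning via keep_ratio retains a fixed number of tokens, which keeps tensor shapes static for batching, while threshold-based full pruning removes more tokens but produces variable-length sequences that the detector must handle.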