Spotlight Poster
VMamba: Visual State Space Model
Yue Liu · Yunjie Tian · Yuzhong Zhao · Hongtian Yu · Lingxi Xie · Yaowei Wang · Qixiang Ye · Jianbin Jiao · Yunfan Liu
East Exhibit Hall A-C #2110
Designing computationally efficient network architectures remains an ongoing need in computer vision. In this paper, we adapt Mamba, a state-space language model, into VMamba, a vision backbone with linear time complexity. At the core of VMamba is a stack of Visual State-Space (VSS) blocks built on the 2D Selective Scan (SS2D) module. By traversing the feature map along four scanning routes, SS2D bridges the gap between the ordered nature of 1D selective scans and the non-sequential structure of 2D vision data, enabling each position to gather contextual information from multiple directions. Based on the VSS blocks, we develop a family of VMamba architectures and accelerate them through a succession of architectural and implementation enhancements. Extensive experiments demonstrate VMamba’s promising performance across diverse visual perception tasks, highlighting its superior input scaling efficiency compared to existing benchmark models. Source code is available at https://github.com/MzeroMiko/VMamba.
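To make the four-route scanning idea concrete, here is a minimal sketch of how a 2D feature map can be unfolded into four 1D sequences (row-major and column-major order, each traversed forward and in reverse), processed per route, and merged back. The helper names `cross_scan` and `cross_merge`, and the cumulative-sum stand-in for the selective-scan (S6) kernel, are illustrative assumptions for exposition, not the official VMamba API.

```python
import numpy as np

def cross_scan(x):
    """Unfold a (H, W, C) feature map into four 1D sequences, one per
    scanning route: row-major and column-major, each forward and reversed.
    Hypothetical helper for illustration only."""
    H, W, C = x.shape
    row = x.reshape(H * W, C)                      # route 1: row-major, forward
    col = x.transpose(1, 0, 2).reshape(H * W, C)   # route 2: column-major, forward
    return np.stack([row, col, row[::-1], col[::-1]])  # routes 3-4: reversed

def cross_merge(seqs, H, W):
    """Inverse of cross_scan: map each route's output back onto the (H, W, C)
    grid and sum, so every position aggregates context from four directions."""
    row, col, row_r, col_r = seqs
    C = row.shape[-1]
    return (row.reshape(H, W, C)
            + col.reshape(W, H, C).transpose(1, 0, 2)
            + row_r[::-1].reshape(H, W, C)
            + col_r[::-1].reshape(W, H, C).transpose(1, 0, 2))

# Example: apply a per-route 1D sequence operation, then merge.
x = np.random.rand(4, 5, 3)          # (H, W, C) feature map
seqs = cross_scan(x)                 # (4, H*W, C), one sequence per route
scanned = np.cumsum(seqs, axis=1)    # placeholder for the 1D selective scan
y = cross_merge(scanned, 4, 5)       # (H, W, C) merged output
```

Because each of the four scans is a linear-time pass over H*W tokens, the overall cost stays linear in the number of image tokens, in contrast to the quadratic cost of global self-attention.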