Poster

SuperVLAD: Compact and Robust Image Descriptors for Visual Place Recognition

Feng Lu · Xinyao Zhang · Canming Ye · Shuting Dong · Lijun Zhang · Xiangyuan Lan · Chun Yuan

Thu 12 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

Visual place recognition (VPR) is an essential task for applications such as augmented reality and robot localization. Over the past decade, mainstream VPR methods have used feature representations based on global aggregation, as exemplified by NetVLAD. These features are suitable for large-scale VPR and robust against viewpoint changes. However, VLAD-based aggregation methods usually learn a large number of clusters (e.g., 64) and their corresponding cluster centers, which directly leads to high-dimensional global features. More importantly, when there is a domain gap between the training and inference data, the cluster centers determined on the training set are often ill-suited for inference, resulting in a performance drop. To this end, we first improve NetVLAD by removing the cluster centers and using only a small number of clusters (e.g., only 4). The proposed method not only simplifies NetVLAD but also enhances generalizability across different domains. We name this method SuperVLAD. In addition, by introducing ghost clusters that are not retained in the final output, we further propose a very low-dimensional 1-Cluster VLAD descriptor, which has the same dimension as the output of GeM pooling but performs notably better. Experimental results suggest that, when paired with a transformer-based backbone, our SuperVLAD outperforms NetVLAD with significantly more compact features and better domain generalization. The proposed method also surpasses state-of-the-art methods with lower feature dimensions on several benchmark datasets. The code will be publicly available.
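To make the aggregation idea concrete, below is a minimal sketch of a SuperVLAD-style layer in PyTorch. It is an illustration based only on the abstract, not the authors' released implementation: the module name, the hyperparameter defaults (4 real clusters, 1 ghost cluster), and the linear soft-assignment head are all assumptions. The key differences from NetVLAD that the abstract describes are reflected directly: no cluster centers are subtracted, only a few clusters are used, and ghost-cluster assignments are computed but dropped from the output.

```python
# Sketch of a SuperVLAD-style aggregation layer (assumed design, not the
# authors' code). Input: patch tokens from a transformer-based backbone.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SuperVLADSketch(nn.Module):
    def __init__(self, dim: int, num_clusters: int = 4, num_ghost: int = 1):
        super().__init__()
        self.num_clusters = num_clusters
        # Soft assignment over real + ghost clusters; ghost clusters absorb
        # uninformative features and are discarded before the output.
        self.assign = nn.Linear(dim, num_clusters + num_ghost)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, N, D) local features / patch tokens
        a = F.softmax(self.assign(x), dim=-1)      # (B, N, K + G)
        a = a[..., : self.num_clusters]            # drop ghost assignments
        # Unlike NetVLAD, no cluster center is subtracted:
        # V[k] = sum_i a_k(x_i) * x_i
        v = torch.einsum("bnk,bnd->bkd", a, x)     # (B, K, D)
        v = F.normalize(v, dim=-1)                 # intra-cluster L2 norm
        v = F.normalize(v.flatten(1), dim=-1)      # final L2-normalized vector
        return v                                   # (B, K * D)
```

Under this reading, setting `num_clusters=1` while keeping ghost clusters yields the 1-Cluster VLAD descriptor: the output dimension equals the backbone feature dimension D, matching the size of a GeM-pooled descriptor.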
