

Poster

MetaUAS: Universal Anomaly Segmentation with One-Prompt Meta-Learning

Bin-Bin Gao

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Zero/few-shot anomaly segmentation methods rely on powerful vision-language models to detect unseen anomalies from textual prompts. However, visual representations are not inherently tied to language. In this paper, we explore how far visual information alone can go, without any guidance from language, for universal anomaly segmentation. We rethink anomaly segmentation and find that it can be unified under change segmentation. This paradigm shift allows us to synthesize large-scale image pairs with object-level and local region changes from existing image datasets, independent of the target anomaly segmentation task. We propose a Universal Anomaly Segmentation framework (MetaUAS) that is trained in a one-prompt meta-learning manner. To handle geometric variations between prompt and query images, we propose a soft feature alignment module that bridges change perception and semantic segmentation. For the first time, this makes it possible to leverage sophisticated semantic segmentation architectures to boost anomaly segmentation. Our method segments unseen anomalies effectively and efficiently using only a single normal image as a prompt, requiring no further training and no guidance from language. We will make the code and models of MetaUAS available.
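To make the one-prompt inference flow described above concrete, the sketch below shows a minimal interpretation: a query image and a single normal prompt image pass through a shared encoder, the prompt features are softly aligned to the query's spatial layout, and a small decoder predicts a change (anomaly) mask. All class names, the attention-based alignment, and the layer sizes are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of one-prompt anomaly segmentation framed as change segmentation.
# Module names, dimensions, and the attention-style alignment are assumptions
# made for illustration; they do not reproduce the MetaUAS architecture exactly.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SoftFeatureAlignment(nn.Module):
    """Softly warps prompt features onto the query's spatial layout via attention."""

    def __init__(self, dim: int):
        super().__init__()
        self.scale = dim ** -0.5

    def forward(self, query_feat: torch.Tensor, prompt_feat: torch.Tensor) -> torch.Tensor:
        b, c, h, w = query_feat.shape
        q = query_feat.flatten(2).transpose(1, 2)    # (B, HW, C)
        k = prompt_feat.flatten(2).transpose(1, 2)   # (B, HW, C)
        attn = torch.softmax(q @ k.transpose(1, 2) * self.scale, dim=-1)
        aligned = attn @ k                           # prompt features re-sampled per query location
        return aligned.transpose(1, 2).reshape(b, c, h, w)


class OnePromptSegmenter(nn.Module):
    """Shared encoder -> soft alignment -> decoder; predicts a change/anomaly mask."""

    def __init__(self, dim: int = 64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim, dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.align = SoftFeatureAlignment(dim)
        self.decoder = nn.Sequential(
            nn.Conv2d(2 * dim, dim, 3, padding=1), nn.ReLU(),
            nn.Conv2d(dim, 1, 1),
        )

    def forward(self, query_img: torch.Tensor, prompt_img: torch.Tensor) -> torch.Tensor:
        fq = self.encoder(query_img)                 # features of the (possibly anomalous) query
        fp = self.encoder(prompt_img)                # features of the single normal prompt
        fp_aligned = self.align(fq, fp)              # compensate geometric variation softly
        logits = self.decoder(torch.cat([fq, fp_aligned], dim=1))
        # Upsample to input resolution; high values mark changed (anomalous) pixels.
        return F.interpolate(logits, size=query_img.shape[-2:],
                             mode="bilinear", align_corners=False)


if __name__ == "__main__":
    model = OnePromptSegmenter()
    query = torch.randn(1, 3, 256, 256)   # test image that may contain an anomaly
    prompt = torch.randn(1, 3, 256, 256)  # one normal image of the same object
    anomaly_map = torch.sigmoid(model(query, prompt))
    print(anomaly_map.shape)              # torch.Size([1, 1, 256, 256])
```

In this framing, training would use synthesized image pairs with object-level and local region changes as supervision, so the same model can then segment unseen anomalies at test time from one normal prompt, without language guidance.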
