

Poster

Segment Anything without Supervision

XuDong Wang · Jingfeng Yang · Trevor Darrell

Wed 11 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

The Segment Anything Model (SAM) requires labor-intensive data labeling---over 20 minutes per image---which restricts its ability to scale to even larger training sets. In this paper, we present Unsupervised SAM (UnSAM), a "segment anything" model for interactive and automatic whole-image segmentation that does not require human annotations. UnSAM uses a divide-and-conquer strategy to "discover" the hierarchical structure of visual scenes. We first leverage top-down clustering to partition an unlabeled image into instance/semantic-level segments. Within each segment, a bottom-up clustering method then iteratively merges pixels into larger groups, forming a hierarchical structure. These unsupervised multi-granular masks are then used to supervise model training. Evaluated across seven popular datasets, UnSAM achieves results competitive with its supervised counterpart SAM and surpasses the previous state of the art in unsupervised segmentation by 11% in AR. Furthermore, we show that supervised SAM can also benefit from our self-supervised labels. By integrating our unsupervised pseudo masks into SA-1B's ground-truth masks and training UnSAM with only 1% of SA-1B, a lightly semi-supervised UnSAM can often segment entities overlooked by supervised SAM, exceeding SAM's AR by over 6.7% and AP by 3.9% on SA-1B.
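
The sketch below is a rough, hypothetical illustration of the divide-and-conquer pseudo-labeling described in the abstract, not UnSAM's actual recipe: it assumes per-pixel features from some self-supervised backbone, and the cluster counts and cosine-similarity thresholds are placeholder choices. It partitions the image top-down with k-means, then, within each coarse segment, bottom-up merges fine parts into progressively coarser groups, emitting a binary mask at every granularity.

import numpy as np
from sklearn.cluster import KMeans

def divide(features, n_coarse=8):
    # Top-down step: partition per-pixel features (H, W, C) into coarse
    # instance/semantic-level segments with k-means.
    h, w, c = features.shape
    labels = KMeans(n_clusters=n_coarse, n_init=10).fit_predict(features.reshape(-1, c))
    return labels.reshape(h, w)

def conquer(features, coarse, thresholds=(0.9, 0.7, 0.5)):
    # Bottom-up step: inside each coarse segment, start from small k-means
    # parts and greedily merge parts whose mean features are cosine-similar;
    # each threshold level contributes one granularity of binary masks.
    h, w, c = features.shape
    flat = features.reshape(-1, c)
    multi_granular_masks = []
    for seg_id in np.unique(coarse):
        idx = np.flatnonzero(coarse.ravel() == seg_id)
        k = int(max(1, min(16, idx.size // 64)))          # placeholder heuristic
        parts = KMeans(n_clusters=k, n_init=10).fit_predict(flat[idx])
        groups = [idx[parts == p] for p in range(k)]
        groups = [g for g in groups if g.size]
        for thr in thresholds:                            # looser threshold -> coarser grouping
            merged = []
            for g in groups:
                g_feat = flat[g].mean(axis=0)
                for j, m in enumerate(merged):
                    m_feat = flat[m].mean(axis=0)
                    cos = g_feat @ m_feat / (
                        np.linalg.norm(g_feat) * np.linalg.norm(m_feat) + 1e-8)
                    if cos > thr:
                        merged[j] = np.concatenate([m, g])
                        break
                else:
                    merged.append(g)
            groups = merged
            for m in groups:
                mask = np.zeros(h * w, dtype=bool)
                mask[m] = True
                multi_granular_masks.append(mask.reshape(h, w))
    return multi_granular_masks

# Example with random "features" standing in for a real backbone's output.
feats = np.random.rand(32, 32, 64).astype(np.float32)
pseudo_masks = conquer(feats, divide(feats))

In the pipeline described above, masks collected this way across granularities would serve as the unsupervised training targets for the segmentation model.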
