

Poster

Flaws can be Applause: Unleashing Potential of Segmenting Ambiguous Objects in SAM

Chenxin Li · Yuzhihuang · Wuyang Li · Hengyu Liu · Xinyu Liu · Qing Xu · Zhen Chen · Yue Huang · Yixuan Yuan

Thu 12 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

While vision foundation models like the Segment Anything Model (SAM) demonstrate potent universality, they also present challenges in producing ambiguous and uncertain predictions. Significant variations in the model's output and granularity can occur with even subtle changes in the prompt, contradicting the consensus requirement of robustness for a model. While prior work has been dedicated to stabilizing and fortifying the predictions of SAM, this paper takes a unique path, exploring how this flaw can be inverted into an advantage when modeling inherently ambiguous data distributions. We introduce an optimization framework based on a conditional variational autoencoder, which jointly models the prompt and the granularity of the object with a latent probability distribution. This approach enables the model to adaptively perceive and represent the real ambiguous label distribution, taming SAM to controllably produce a series of diverse, convincing, and reasonable segmentation outputs. Extensive experiments on several practical deployment scenarios involving ambiguity demonstrate the exceptional performance of our framework.
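The abstract's core idea, sampling diverse segmentations from a latent distribution conditioned on the prompt, follows the conditional-VAE pattern. The toy NumPy sketch below is not the paper's architecture: all layer shapes, names (`ToyConditionalVAE`, `W_mu`, `W_lv`, `W_dec`), and the simple linear encoder/decoder are hypothetical placeholders, shown only to illustrate how a latent distribution over (prompt, granularity) can yield multiple plausible masks from one input via the reparameterization trick.

```python
import numpy as np


def reparameterize(mu, log_var, rng):
    # Sample z = mu + sigma * eps (the reparameterization trick),
    # so gradients could flow through mu and log_var during training.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps


class ToyConditionalVAE:
    """Toy CVAE head: encodes a fused (prompt, image) feature into a
    latent Gaussian, then decodes each latent sample into mask logits.
    Shapes and weights are illustrative, not from the paper."""

    def __init__(self, feat_dim=16, latent_dim=4, mask_dim=8, seed=0):
        self.rng = np.random.default_rng(seed)
        scale = 0.1
        self.W_mu = self.rng.standard_normal((feat_dim, latent_dim)) * scale
        self.W_lv = self.rng.standard_normal((feat_dim, latent_dim)) * scale
        self.W_dec = self.rng.standard_normal((feat_dim + latent_dim, mask_dim)) * scale

    def forward(self, feat):
        # Encoder: map the conditioning feature to latent mean / log-variance.
        mu = feat @ self.W_mu
        log_var = feat @ self.W_lv
        z = reparameterize(mu, log_var, self.rng)
        # Decoder: condition on both the feature and the latent sample.
        logits = np.concatenate([feat, z], axis=-1) @ self.W_dec
        return logits, mu, log_var

    def kl_divergence(self, mu, log_var):
        # KL(q(z|x) || N(0, I)) regularizer used when training a VAE.
        return -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var))

    def sample_masks(self, feat, n=3, thresh=0.0):
        # Different latent draws -> different plausible binary segmentations.
        return [(self.forward(feat)[0] > thresh).astype(int) for _ in range(n)]
```

At inference time, repeated latent draws turn one ambiguous prompt into a set of distinct candidate masks, which is the behavior the abstract frames as turning SAM's instability into controllable diversity.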
