

Poster

Localize, Understand, Collaborate: Semantic-Aware Dragging via Intention Reasoner

Xing Cui · Peipei Li · Zekun Li · Xuannan Liu · Yueying Zou · Zhaofeng He

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Flexible and accurate drag-based editing is a challenging task that has recently garnered significant attention. Current methods typically model this problem as automatically learning "how to drag" through point dragging and often produce a single deterministic estimation, which presents two key limitations: 1) overlooking the inherently ill-posed nature of drag-based editing, where multiple results may correspond to a given input, as illustrated in Fig. 1; 2) ignoring the constraint of image quality, which can lead to unexpected distortion. To alleviate this, we propose LucidDrag, which shifts the focus from "how to drag" to a "what-then-how" paradigm. LucidDrag comprises an intention reasoner and a collaborative guidance sampling mechanism. The former infers several optimal editing strategies, identifying what content should be edited and in what semantic direction. Building on these strategies, the latter addresses "how to drag" by collaboratively integrating the existing editing guidance with the newly proposed semantic guidance and quality guidance. Specifically, semantic guidance is derived by establishing a semantic editing direction from the reasoned intentions, while quality guidance is achieved through classifier guidance using an image-fidelity discriminator. Both qualitative and quantitative comparisons demonstrate the superiority of LucidDrag over previous methods. The code will be released.
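Since the code has not yet been released, the following is only a minimal sketch of what a collaborative guidance step might look like: at each denoising step, gradients of an editing energy, a semantic-direction score, and a fidelity-discriminator logit steer the base noise prediction. All names, signs, and weights (`w_edit`, `w_sem`, `w_qual`, the stand-in networks) are illustrative assumptions, not LucidDrag's actual implementation.

```python
# Hypothetical sketch of collaborative guidance sampling (not the released code).
import torch

def guided_noise_pred(eps_model, fidelity_disc, x_t, t,
                      edit_energy, sem_direction,
                      w_edit=1.0, w_sem=0.5, w_qual=0.1):
    """Combine a base noise prediction with editing, semantic, and quality guidance."""
    x_t = x_t.detach().requires_grad_(True)
    eps = eps_model(x_t, t)  # base diffusion noise prediction

    # Editing guidance: gradient of a point-dragging energy (e.g. feature matching).
    g_edit = torch.autograd.grad(edit_energy(x_t).sum(), x_t, retain_graph=True)[0]

    # Semantic guidance: score along the editing direction inferred by the
    # intention reasoner; its gradient pushes the sample in that direction.
    sem_score = (x_t.flatten(1) @ sem_direction.flatten()).sum()
    g_sem = torch.autograd.grad(sem_score, x_t, retain_graph=True)[0]

    # Quality guidance: classifier guidance from an image-fidelity discriminator.
    g_qual = torch.autograd.grad(fidelity_disc(x_t).sum(), x_t)[0]

    # Collaborative combination: descend the editing energy, ascend the
    # semantic score and the fidelity logit (signs are a modeling assumption).
    return eps + w_edit * g_edit - w_sem * g_sem - w_qual * g_qual

if __name__ == "__main__":
    # Toy stand-ins so the sketch runs end to end.
    eps_model = lambda x, t: torch.randn_like(x)           # stand-in denoiser
    fidelity_disc = lambda x: x.mean(dim=(1, 2, 3))        # stand-in fidelity logit
    edit_energy = lambda x: (x ** 2).mean(dim=(1, 2, 3))   # stand-in drag energy
    x_t = torch.randn(1, 3, 64, 64)
    sem_direction = torch.randn(3 * 64 * 64)
    eps = guided_noise_pred(eps_model, fidelity_disc, x_t, t=torch.tensor([10]),
                            edit_energy=edit_energy, sem_direction=sem_direction)
    print(eps.shape)  # torch.Size([1, 3, 64, 64])
```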
