

Poster

Referring Human Pose and Mask Estimation In the Wild

Bo Miao · Mingtao Feng · Zijie Wu · Mohammed Bennamoun · Yongsheng Gao · Ajmal Mian

[ Project Page ]
Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

We introduce Referring Human Pose and Mask Estimation (R-HPM) in the wild, where either a text or positional prompt specifies the person of interest in an image. This new task holds significant potential for applications such as assistive robotics and sports analysis. In contrast to previous works, R-HPM (i) ensures high-quality, identity-aware results corresponding to the referred person, and (ii) simultaneously predicts human pose and mask for a comprehensive representation. To this end, we introduce a large-scale dataset named RefHuman, which extends the MS COCO dataset with additional text and positional prompt annotations. RefHuman includes over 50K annotated instances in the wild, each equipped with keypoint, mask, and prompt annotations. To enable prompt-conditioned estimation, we propose the first end-to-end promptable approach for R-HPM, named UniPHD. UniPHD extracts multimodal representations and employs a pose-centric hierarchical decoder to process (text or positional) instance queries and keypoint queries, producing results specific to the referred person. Extensive experiments demonstrate that UniPHD produces high-quality results conditioned on the given prompts and achieves top-ranked performance on RefHuman val and MS COCO val2017. The anonymous download link for our RefHuman dataset is provided in the Appendix for review, and we will make both the dataset and the code public upon paper acceptance.
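The abstract describes UniPHD only at a high level. Purely as an illustration of the interface such a promptable model implies, the sketch below shows how a prompt-conditioned decoder could jointly predict keypoints and a mask for the referred person. It is not the authors' implementation: the class name, feature dimensions, attention layout, and output heads are all assumptions made for this example.

    import torch
    import torch.nn as nn

    class PromptConditionedDecoder(nn.Module):
        """Minimal sketch of a prompt-conditioned decoder for R-HPM.

        Hypothetical names and shapes; the actual UniPHD pose-centric
        hierarchical decoder is described in the paper, not reproduced here.
        """

        def __init__(self, dim=256, num_keypoints=17, num_heads=8):
            super().__init__()
            # Learnable keypoint queries; the instance query comes from the prompt.
            self.keypoint_queries = nn.Parameter(torch.randn(num_keypoints, dim))
            self.self_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
            self.keypoint_head = nn.Linear(dim, 2)   # (x, y) per keypoint
            self.mask_head = nn.Linear(dim, dim)     # mask embedding, dotted with pixel features

        def forward(self, prompt_embed, image_feats, pixel_feats):
            # prompt_embed: (B, 1, dim)   text or positional prompt embedding
            # image_feats:  (B, HW, dim)  flattened multimodal image features
            # pixel_feats:  (B, dim, H, W) per-pixel embeddings for mask prediction
            B = prompt_embed.shape[0]
            queries = torch.cat(
                [prompt_embed, self.keypoint_queries.unsqueeze(0).expand(B, -1, -1)], dim=1
            )
            # Keypoint queries exchange information with the instance (prompt) query ...
            queries, _ = self.self_attn(queries, queries, queries)
            # ... and gather evidence from the image features.
            queries, _ = self.cross_attn(queries, image_feats, image_feats)

            inst_q, kpt_q = queries[:, :1], queries[:, 1:]
            keypoints = self.keypoint_head(kpt_q).sigmoid()   # (B, K, 2) normalized coords
            mask_embed = self.mask_head(inst_q)               # (B, 1, dim)
            masks = torch.einsum("bqc,bchw->bqhw", mask_embed, pixel_feats)  # (B, 1, H, W) logits
            return keypoints, masks

    # Toy usage with random tensors, just to show the expected shapes.
    dec = PromptConditionedDecoder()
    kpts, masks = dec(
        torch.randn(2, 1, 256),          # prompt embedding
        torch.randn(2, 64 * 64, 256),    # flattened image features
        torch.randn(2, 256, 64, 64),     # per-pixel embeddings
    )
    print(kpts.shape, masks.shape)  # torch.Size([2, 17, 2]) torch.Size([2, 1, 64, 64])

The single instance query stands in for the referred person, so both the keypoints and the mask are tied to that identity, which is the behavior the task definition above asks for.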
