

Poster

KptLLM: Unveiling the Power of Large Language Model for Keypoint Comprehension

Jie Yang · Wang ZENG · Sheng Jin · Lumin Xu · Wentao Liu · Chen Qian · Ruimao Zhang

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Recent advancements in Multimodal Large Language Models (MLLMs) have significantly enhanced their capabilities in vision-language tasks such as image captioning and question answering. However, these models lack proficiency in the fine-grained perceptual task of keypoint comprehension. To bridge this gap, we introduce the novel challenge of Semantic Keypoint Comprehension, which aims to understand keypoints across different human-AI interaction contexts: keypoint semantic understanding, visual prompt-based keypoint detection, and textual prompt-based keypoint detection. We further introduce KptLLM, a unified multimodal model that adopts an identify-then-detect strategy to address these challenges: it first discerns the semantics of a keypoint, then precisely determines its position through a chain-of-thought process. With several carefully designed modules, KptLLM adeptly handles various input modalities, interpreting both semantic content and keypoint locations. Extensive experiments demonstrate KptLLM's superiority on various keypoint detection benchmarks, as well as its unique semantic capabilities in interpreting keypoints. Code and models will be released to facilitate future research.
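For intuition, the identify-then-detect strategy can be pictured as a two-stage pipeline: the model first names what a keypoint is, then localizes it conditioned on that semantic answer. The sketch below is a hypothetical illustration under that reading, not the authors' released implementation; all names (KeypointQuery, identify_semantics, detect_position) and the stubbed model calls are assumptions for exposition.

```python
# Hypothetical sketch of an identify-then-detect flow for keypoint comprehension.
# The real KptLLM performs these stages inside an MLLM; here each stage is a stub.

from dataclasses import dataclass
from typing import Tuple


@dataclass
class KeypointQuery:
    image: str   # path to the query image
    prompt: str  # a textual prompt (or a stand-in for a visual prompt)


def identify_semantics(query: KeypointQuery) -> str:
    """Stage 1: decide what the keypoint *is* (its semantic label)."""
    # A real model would run an MLLM forward pass over image + prompt.
    return f"semantic label inferred from prompt {query.prompt!r}"


def detect_position(query: KeypointQuery, semantics: str) -> Tuple[float, float]:
    """Stage 2: localize the keypoint, conditioned on the identified semantics."""
    # Stubbed normalized (x, y) coordinates; a real model would decode these.
    return (0.5, 0.5)


def identify_then_detect(query: KeypointQuery) -> Tuple[str, Tuple[float, float]]:
    # Chain-of-thought ordering: semantics first, coordinates second.
    semantics = identify_semantics(query)
    position = detect_position(query, semantics)
    return semantics, position


if __name__ == "__main__":
    q = KeypointQuery(image="person.jpg", prompt="left elbow")
    print(identify_then_detect(q))
```

The point of the ordering is that localization is conditioned on an explicit semantic answer rather than predicted directly, mirroring the chain-of-thought process described in the abstract.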
