Poster
Video-based Human-Object Interaction Detection from Tubelet Tokens
Danyang Tu · Wei Sun · Xiongkuo Min · Guangtao Zhai · Wei Shen
Hall J (level 1) #702
Keywords: [ tubelet token ] [ video analysis ] [ computer vision ] [ transformer ] [ human-object interaction ]
Abstract:
We present a novel vision Transformer, named TUTOR, which learns tubelet tokens that serve as highly abstracted spatio-temporal representations for video-based human-object interaction (V-HOI) detection. Tubelet tokens structure a video by agglomerating and linking semantically related patch tokens along the spatial and temporal dimensions, which brings two benefits: 1) compactness: each token is learned via a selective attention mechanism that reduces redundant dependencies on other tokens; 2) expressiveness: thanks to agglomeration and linking, each token can align with a semantic instance, i.e., an object or a human. The effectiveness and efficiency of TUTOR are verified by extensive experiments. Results show that our method outperforms existing works by large margins, with a relative mAP gain of $16.14\%$ on VidHOI, a 2-point gain on CAD-120, and a $4\times$ speedup.
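To make the "agglomerate, then link" idea concrete, below is a minimal, hypothetical sketch of how patch tokens might be merged within each frame and then associated across frames to form tubelet-like tokens. It is not the authors' TUTOR implementation: the function names (`agglomerate_tokens`, `link_tubelets`), the random cluster queries, the softmax assignment, and the greedy cosine-similarity linking are all illustrative assumptions standing in for the paper's learned selective attention.

```python
# Illustrative sketch only: a simplified "agglomerate then link" over patch tokens,
# NOT the TUTOR model from the paper. All names and choices here are hypothetical.
import torch
import torch.nn.functional as F


def agglomerate_tokens(patch_tokens: torch.Tensor, num_clusters: int) -> torch.Tensor:
    """Merge semantically similar patch tokens within one frame.

    patch_tokens: (N, D) patch embeddings for a single frame.
    Returns (num_clusters, D) agglomerated tokens (soft cluster means).
    """
    # Hypothetical cluster queries; random here, learnable in a real model.
    queries = torch.randn(num_clusters, patch_tokens.shape[-1])
    # Selective assignment: each patch is softly routed to a small set of clusters.
    logits = F.normalize(patch_tokens, dim=-1) @ F.normalize(queries, dim=-1).T
    assign = logits.softmax(dim=-1)                      # (N, K)
    # Weighted average of patches per cluster -> compact, instance-level tokens.
    return (assign.T @ patch_tokens) / (assign.sum(dim=0).unsqueeze(-1) + 1e-6)


def link_tubelets(frame_tokens: list[torch.Tensor]) -> torch.Tensor:
    """Greedily link each frame's tokens to the previous frame by cosine similarity,
    then average the linked tokens over time to obtain tubelet tokens."""
    linked = [frame_tokens[0]]
    for cur in frame_tokens[1:]:
        prev = linked[-1]
        sim = F.normalize(prev, dim=-1) @ F.normalize(cur, dim=-1).T
        match = sim.argmax(dim=-1)                       # previous token -> best current token
        linked.append(cur[match])
    return torch.stack(linked).mean(dim=0)               # (K, D) tubelet tokens


# Toy usage: 8 frames, 196 patches each, 256-dim embeddings, 10 tubelet tokens.
frames = [agglomerate_tokens(torch.randn(196, 256), num_clusters=10) for _ in range(8)]
tubelets = link_tubelets(frames)
print(tubelets.shape)  # torch.Size([10, 256])
```

The sketch captures only the two properties highlighted in the abstract: agglomeration yields a small number of tokens per frame (compactness), and temporal linking lets each token track one semantic instance across frames (expressiveness).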