Poster

ActFusion: a Unified Diffusion Model for Action Segmentation and Anticipation

Dayoung Gong · Suha Kwak · Minsu Cho

Fri 13 Dec 4:30 p.m. PST — 7:30 p.m. PST

Abstract:

Temporal action segmentation and long-term action anticipation are two popular vision tasks for the temporal analysis of actions in videos. Despite their apparent relevance and potential complementarity, these two problems have been investigated as separate and distinct tasks. In this work, we tackle these two problems, action segmentation and action anticipation, jointly using a unified diffusion model dubbed ActFusion. The key idea behind the unification is to train the model to effectively handle both visible and invisible parts of the sequence: the visible part corresponds to observed frames of the video, and the invisible part to the future to be anticipated. To this end, we introduce a new anticipative masking strategy during training, in which a late part of the video frames is masked as invisible and replaced with learnable tokens, so that the model learns to predict the invisible future. Experimental results demonstrate bi-directional benefits between action segmentation and anticipation. ActFusion achieves state-of-the-art performance across the standard benchmarks of 50 Salads, Breakfast, and GTEA, outperforming task-specific models on both tasks.
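As a rough illustration of the anticipative masking idea described in the abstract, the PyTorch sketch below replaces a trailing (future) fraction of per-frame features with a shared learnable token. All names, shapes, and the masking ratio here are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of anticipative masking: the late part of the frame
# sequence is treated as the unseen future and swapped for a learnable token.
import torch
import torch.nn as nn


class AnticipativeMasking(nn.Module):
    """Mask a trailing portion of frame features with a learnable token (assumed design)."""

    def __init__(self, feature_dim: int):
        super().__init__()
        # One learnable token shared across all masked (invisible) positions.
        self.mask_token = nn.Parameter(torch.zeros(feature_dim))

    def forward(self, frames: torch.Tensor, mask_ratio: float) -> torch.Tensor:
        # frames: (batch, time, feature_dim); mask_ratio in [0, 1] is the
        # fraction of frames at the end of the sequence treated as the future.
        b, t, d = frames.shape
        num_visible = int(t * (1.0 - mask_ratio))
        masked = frames.clone()
        # The late part of the sequence becomes "invisible": the model must
        # predict actions there from the visible prefix alone.
        masked[:, num_visible:, :] = self.mask_token
        return masked


if __name__ == "__main__":
    masking = AnticipativeMasking(feature_dim=2048)
    video = torch.randn(2, 100, 2048)       # 2 videos, 100 frames each
    out = masking(video, mask_ratio=0.5)    # last 50 frames are masked
    print(out.shape)                        # torch.Size([2, 100, 2048])
```

A single model trained this way can serve both tasks: with no masking it performs segmentation over fully observed frames, and with trailing frames masked it anticipates future actions.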
