Poster
What to Say and When to Say it: Live Fitness Coaching as a Testbed for Situated Interaction
Sunny Panchal · Apratim Bhattacharyya · Guillaume Berger · Antoine Mercier · Cornelius Böhm · Florian Dietrichkeit · Reza Pourreza · Xuanlin Li · Pulkit Madan · Mingu Lee · Mark Todorovich · Ingo Bax · Roland Memisevic
Tasks at the intersection of vision and language have had a profound impact on advancing the capabilities of vision-language models such as dialog-based assistants. However, models trained on existing tasks are limited to turn-based interactions, where each turn must be stepped (i.e., prompted) by the user. Open-ended, asynchronous interactions, in which an AI model may proactively deliver timely responses or feedback based on the unfolding situation in real time, remain an open challenge. In this work, we present the FIT-Coach benchmark and dataset, which explore human-AI interaction in the challenging yet controlled real-world domain of fitness coaching -- a task that intrinsically requires monitoring live user activity and providing timely feedback. Crucially, our dataset includes corrective feedback that addresses potential user mistakes and steers the user toward successful workout completion. Our experiments reveal the limitations of existing state-of-the-art vision-language models in such asynchronous, situated interactions. Motivated by this, we propose a simple end-to-end streaming baseline that can respond asynchronously to user activity with appropriate feedback at the appropriate time.