Spotlight in Workshop: Gaze meets ML
Intention Estimation via Gaze for Robot Guidance in Hierarchical Tasks
Yifan Shen · Xiaoyu Mo · Vytas Krisciunas · David Hanson · Bertram Shi
Keywords: [ Humanoid Robot ] [ Guidance ] [ Gaze ] [ Intention Estimation ] [ Hierarchical Task ]
To provide effective guidance to a human agent performing hierarchical tasks, a robot must determine the level at which to provide guidance. This relies on estimating the agent's intention at each level of the hierarchy. Unfortunately, observations of task-related movements provide direct information about intention only at the lowest level. In addition, lower-level tasks may be shared across higher-level goals. The resulting ambiguity impairs timely estimation of higher-level intent. This can be resolved by incorporating observations of secondary behaviors such as gaze. We propose a probabilistic framework enabling robot guidance in hierarchical tasks via intention estimation from observations of both task-related movements and eye gaze. Experiments with a virtual humanoid robot demonstrate that gaze is a very powerful cue that largely compensates for simplifying assumptions made in modelling task-related movements, enabling a robot controlled by our framework to nearly match the performance of a human wizard. We examine the effect of gaze in improving both the precision and timeliness of guidance cue generation, finding that while both improve with gaze, the improvements in timeliness are more significant. Our results suggest that gaze observations are critical in achieving natural and fluid human-robot collaboration, which may enable human agents to undertake significantly more complex tasks, and to perform them more safely and effectively, than would be possible without guidance.
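To make the core idea concrete, the following is a minimal, purely illustrative sketch of Bayesian intention estimation over a two-level task hierarchy, fusing movement and gaze likelihoods. The hierarchy, task names, and likelihood values are hypothetical placeholders, not the authors' framework or experimental setup; the sketch only shows why movement evidence alone leaves high-level intent ambiguous when sub-tasks are shared, and how a gaze observation resolves it.

```python
# Illustrative sketch only: a Bayesian update over high-level goals that
# fuses movement and gaze observations. All names and values are hypothetical.

# Hypothetical hierarchy: each high-level goal decomposes into low-level
# sub-tasks. Sub-tasks shared across goals (here "reach_cup") are what make
# high-level intent ambiguous when only task-related movements are observed.
HIERARCHY = {
    "make_tea":    ["reach_cup", "reach_kettle"],
    "make_coffee": ["reach_cup", "reach_grinder"],
}

def update_belief(belief, p_move_given_sub, p_gaze_given_goal):
    """One Bayesian update of P(goal).

    belief:            prior P(goal)
    p_move_given_sub:  likelihood of the observed movement under each sub-task
    p_gaze_given_goal: likelihood of the observed gaze under each goal
    """
    posterior = {}
    for goal, subs in HIERARCHY.items():
        # Movement evidence reaches the goal only through its sub-tasks
        # (marginalising over a uniform choice of sub-task within the goal).
        p_move_given_goal = sum(p_move_given_sub.get(s, 0.0) for s in subs) / len(subs)
        posterior[goal] = belief[goal] * p_move_given_goal * p_gaze_given_goal.get(goal, 1.0)
    z = sum(posterior.values()) or 1.0
    return {g: p / z for g, p in posterior.items()}

if __name__ == "__main__":
    belief = {"make_tea": 0.5, "make_coffee": 0.5}

    # A hand movement toward the cup is consistent with both goals ...
    belief = update_belief(
        belief,
        p_move_given_sub={"reach_cup": 0.9, "reach_kettle": 0.05, "reach_grinder": 0.05},
        p_gaze_given_goal={},  # no gaze observation yet
    )
    print("after movement only:", belief)    # stays near 50/50

    # ... but a glance toward the kettle disambiguates high-level intent early.
    belief = update_belief(
        belief,
        p_move_given_sub={"reach_cup": 0.9, "reach_kettle": 0.05, "reach_grinder": 0.05},
        p_gaze_given_goal={"make_tea": 0.8, "make_coffee": 0.2},
    )
    print("after movement + gaze:", belief)  # shifts strongly toward make_tea
```

In this toy example, the shared "reach_cup" movement leaves the posterior unchanged at 50/50, while adding the gaze likelihood shifts it to roughly 80/20, which is the qualitative effect the abstract attributes to gaze: earlier, less ambiguous estimates of higher-level intent.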