

Oral Poster

Learning rigid-body simulators over implicit shapes for large-scale scenes and vision

Yulia Rubanova · Tatiana Lopez-Guevara · Kelsey Allen · Will Whitney · Kimberly Stachenfeld · Tobias Pfaff

[ Project Page ]
Fri 13 Dec 11 a.m. PST — 2 p.m. PST
 
Oral presentation: Oral Session 5A
Fri 13 Dec 10 a.m. PST — 11 a.m. PST

Abstract:

Simulating large scenes with many rigid objects is crucial for a variety of applications, such as robotics, engineering, film and video games. Rigid interactions are notoriously hard to model: small changes to the initial state or the simulation parameters can lead to large changes in the final state. Recently, learned simulators based on graph neural networks (GNNs) were developed as an alternative to hand-designed simulators like MuJoCo and Bullet. They are able to accurately capture the dynamics of real objects directly from real-world observations. However, current state-of-the-art learned simulators operate on meshes and scale poorly to scenes with many objects or detailed shapes. Here we present SDF-Sim, the first learned rigid-body simulator designed for scale. We use learned signed-distance functions (SDFs) to represent the object shapes and to speed up distance computation. We design the simulator to leverage SDFs and avoid the fundamental bottleneck of previous simulators: collision detection. For the first time in the literature, we demonstrate that we can scale GNN-based simulators to scenes with hundreds of objects and up to 1.1 million nodes, where mesh-based approaches run out of memory. Finally, we show that SDF-Sim can be applied to real-world scenes by extracting SDFs from multi-view images.
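
To illustrate the core idea, here is a minimal Python sketch of why SDF queries sidestep mesh-based collision detection. An analytic sphere SDF stands in for the learned distance network, and the helper names are hypothetical, not from the paper: each node of one object needs only a single batched distance query against the other object's SDF, rather than pairwise tests against that object's mesh triangles.

import numpy as np

# Stand-in for a learned SDF network: a unit sphere centred at the
# origin. In SDF-Sim the distance field would be a learned network,
# queried the same way.
def sphere_sdf(points):
    """Signed distance from each query point to the sphere surface
    (negative inside, positive outside)."""
    return np.linalg.norm(points, axis=-1) - 1.0

def collision_candidates(nodes, sdf, margin=0.05):
    """Return indices of nodes within `margin` of another object's
    surface, using one batched SDF query per node instead of
    per-triangle distance tests."""
    distances = sdf(nodes)  # one distance value per node
    return np.flatnonzero(distances < margin)

# Example: 1,000 nodes of a second object, expressed in the sphere's
# local frame; only the near-surface ones need collision edges.
rng = np.random.default_rng(0)
nodes = rng.uniform(-1.5, 1.5, size=(1000, 3))
near = collision_candidates(nodes, sphere_sdf)
print(f"{near.size} of {nodes.shape[0]} nodes need collision edges")

Because the cost of each query is independent of the other object's mesh resolution, this kind of distance computation is what lets the approach scale to scenes with hundreds of objects and over a million nodes.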
