Workshop: 3rd Robot Learning Workshop
Invited Talk - "Learning-based Control of a Legged Robot"
Jemin Hwangbo · JooWoong Byun
Legged robots pose one of the greatest challenges in robotics. The dynamic and agile maneuvers of animals cannot be imitated by existing hand-crafted control methods. A compelling alternative is reinforcement learning, which requires minimal craftsmanship and promotes the natural evolution of a control policy. So far, however, reinforcement learning research for legged robots has been largely confined to simulation, and only a few comparatively simple examples have been deployed on real systems. The primary reason is that training with real robots, particularly with dynamically balancing systems, is complicated and expensive. Recent algorithmic improvements have made simulation both cheaper and more accurate, so leveraging it to obtain control policies is a promising direction. However, a few simulation-related issues must be addressed before such policies can be used in practice. The biggest obstacle is the so-called reality gap: discrepancies between the simulated and the real system. Hand-crafted models often fail to reach reasonable accuracy because of the complexity of the actuation systems of existing robots.

This talk will focus on how such obstacles can be overcome. The main approaches are twofold: a fast and accurate algorithm for solving contact dynamics, and a data-driven simulation-augmentation method using deep learning. These methods are applied to the ANYmal robot, a sophisticated medium-dog-sized quadrupedal system. Using policies trained in simulation, the quadrupedal machine achieves locomotion skills that go beyond what had been achieved with prior methods: ANYmal is capable of precisely and energy-efficiently following high-level body velocity commands, running faster than ever before, and recovering from a fall even in complex configurations.
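As a rough illustration of the data-driven simulation-augmentation idea mentioned above, the sketch below shows how a learned actuator model might look: a small network trained on logged robot data to predict joint torque from a short history of position-tracking errors and joint velocities, which then stands in for a hand-crafted actuator model during policy training in simulation. The class name, layer sizes, history length, and activation are illustrative assumptions, not the exact architecture from the talk.

```python
# Minimal sketch of a learned actuator model: an MLP mapping a short
# history of joint position errors and joint velocities to the torque
# the real actuator would produce. Architecture details are assumed
# for illustration only.
import torch
import torch.nn as nn

class ActuatorNet(nn.Module):
    def __init__(self, history_len: int = 3, hidden: int = 32):
        super().__init__()
        # Input: position error and velocity over the last
        # `history_len` control steps (per joint).
        in_dim = 2 * history_len
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, hidden), nn.Softsign(),
            nn.Linear(hidden, 1),  # predicted joint torque
        )

    def forward(self, pos_err_hist: torch.Tensor,
                vel_hist: torch.Tensor) -> torch.Tensor:
        # pos_err_hist, vel_hist: (batch, history_len)
        x = torch.cat([pos_err_hist, vel_hist], dim=-1)
        return self.net(x)

def train_step(model, optimizer, pos_err_hist, vel_hist, measured_torque):
    # Supervised regression on logged robot data: fit the network to
    # measured torques before using it inside the simulator.
    pred = model(pos_err_hist, vel_hist)
    loss = nn.functional.mse_loss(pred, measured_torque)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Once fitted, such a model can replace the idealized actuator in the simulator's torque computation, narrowing the reality gap before the locomotion policy is trained and transferred to the hardware.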