Whole-Body Constrained Learning for Legged Locomotion via Hierarchical Optimization

Our proposed whole-body constrained learning framework enables agile locomotion over a wide range of challenging terrains while enforcing constraints via hierarchical optimization.

Abstract

Reinforcement learning (RL) has demonstrated impressive performance in legged locomotion across various challenging environments. However, due to the sim-to-real gap and a lack of explainability, unconstrained RL policies deployed in the real world still suffer from inevitable safety issues, such as joint collisions, excessive torque, or foot slippage in low-friction environments. These problems limit their use in missions with strict safety requirements, such as planetary exploration, nuclear facility inspection, and deep-sea exploration, in which failure or unreliable locomotion would result in serious harm to human life, costly equipment, or mission objectives. In this paper, we design a hierarchical optimization-based whole-body follower, which integrates both hard and soft constraints into end-to-end RL so that the robot moves with stronger safety guarantees. Leveraging the advantages of model-based control, our approach allows various types of hard and soft constraints to be defined during training or deployment, which enables policy fine-tuning and mitigates the challenges of sim-to-real transfer. Meanwhile, it preserves the robustness of end-to-end RL when dealing with locomotion in complex unstructured environments. The trained policy with the introduced constraints was deployed on a hexapod robot and tested in various outdoor environments, including snow-covered slopes and stairs, demonstrating the strong traversability and safety of our approach.
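The abstract does not specify the exact solver, but hierarchical optimization of the kind described is commonly realized as prioritized least-squares: each constraint level is solved in the null space of all higher-priority levels, so hard (high-priority) constraints are never traded off against soft (low-priority) tracking objectives. The following is a minimal sketch of that idea; the function name, task structure, and two-level example are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def hierarchical_ls(tasks):
    """Prioritized least-squares: solve each task (A, b) in the
    null space of all higher-priority tasks, so lower-priority
    (soft) objectives never perturb higher-priority (hard) ones.
    `tasks` is a list of (A, b) pairs ordered by priority.
    """
    n = tasks[0][0].shape[1]
    x = np.zeros(n)
    N = np.eye(n)  # null-space projector of tasks solved so far
    for A, b in tasks:
        AN = A @ N
        # minimize ||A (x + N z) - b|| over the free directions z
        z = np.linalg.pinv(AN) @ (b - A @ x)
        x = x + N @ z
        # shrink the projector so this level's solution is preserved
        N = N @ (np.eye(n) - np.linalg.pinv(AN) @ AN)
    return x

# Priority 1 fixes x[0]; priority 2 can only shape the remaining freedom.
hard = (np.array([[1.0, 0.0]]), np.array([1.0]))
soft = (np.array([[0.0, 1.0]]), np.array([2.0]))
x = hierarchical_ls([hard, soft])  # -> [1.0, 2.0]
```

If a soft task conflicts with a hard one (e.g. both act on `x[0]`), the hard task's solution is untouched because the soft task is projected into its null space; this is the property that lets safety constraints dominate the learned tracking objective.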

Video

Outdoor experiments

Ice rink

Snow-covered slope

Soft sand

Pipeline

Real-world comparison experiments

PD follower

WBC follower

F-T interaction constraints

Kinematic constraints