Learning When to Switch: Composing Controllers to Traverse a Sequence of Terrain Artifacts

Authors :
Tidd, Brendan
Hudson, Nicolas
Cosgun, Akansel
Leitner, Jurgen
Source :
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
Publication Year :
2020

Abstract

Legged robots often use separate control policies that are highly engineered for traversing difficult terrain such as stairs, gaps, and steps, where switching between policies is only possible when the robot is in a region that is common to adjacent controllers. Deep Reinforcement Learning (DRL) is a promising alternative to hand-crafted control design, though it typically requires the full set of test conditions to be known before training. DRL policies can result in complex (often unrealistic) behaviours that have few or no overlapping regions between adjacent policies, making it difficult to switch behaviours. In this work we develop multiple DRL policies with Curriculum Learning (CL), each of which can traverse a single respective terrain condition, while ensuring an overlap between policies. We then train a network for each destination policy that estimates the likelihood of successfully switching from any other policy. We evaluate our switching method on a previously unseen combination of terrain artifacts and show that it performs better than heuristic methods. While our method is trained on individual terrain types, it performs comparably to a Deep Q Network trained on the full set of terrain conditions. This approach allows the development of separate policies in constrained conditions with embedded prior knowledge about each behaviour, is scalable to any number of behaviours, and prepares DRL methods for applications in the real world.
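The switching scheme described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the per-destination switch estimators here are stand-in callables (in the paper they are learned networks), and the function, argument names, and threshold value are assumptions.

```python
# Hypothetical sketch of likelihood-based policy switching: each destination
# policy owns an estimator that maps the robot state (and the currently active
# policy) to an estimated probability of a successful switch.
from typing import Callable, Dict, List

def select_policy(
    state: List[float],
    current: str,
    switch_estimators: Dict[str, Callable[[List[float], str], float]],
    threshold: float = 0.8,  # assumed cutoff, not from the paper
) -> str:
    """Return the destination policy with the highest estimated switch-success
    probability, but only if it clears the threshold; otherwise keep the
    current policy active."""
    best_name, best_p = current, threshold
    for name, estimate in switch_estimators.items():
        if name == current:
            continue  # no need to "switch" into the active policy
        p = estimate(state, current)  # learned network in the paper; stub here
        if p > best_p:
            best_name, best_p = name, p
    return best_name
```

Because each estimator is trained only for its own destination policy, adding a new behaviour requires training one new estimator rather than retraining the whole system, which is the scalability claim made in the abstract.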

Details

Database :
arXiv
Journal :
In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021
Publication Type :
Report
Accession number :
edsarx.2011.00440
Document Type :
Working Paper