
Fusion of Multiple Behaviors Using Layered Reinforcement Learning.

Authors :
Hwang, Kao-Shing
Chen, Yu-Jen
Wu, Chun-Ju
Source :
IEEE Transactions on Systems, Man & Cybernetics: Part A. Jul 2012, Vol. 42, Issue 4, p999-1004. 6p.
Publication Year :
2012

Abstract

This study introduces a method that enables a robot to learn new tasks through human demonstration and independent practice. The proposed process consists of two interconnected phases. In the first phase, state-action data are obtained from human demonstrations, and an aggregated state space is learned through reinforcement learning in the form of a decision tree that groups similar states together. Without a pruning post-process during tree induction, the tree encodes a control policy that controls the robot while repeatedly improving itself. Once a variety of basic behaviors has been learned, more elaborate behaviors can be generated by selectively organizing several of them with another Q-learning algorithm: the motor-level outputs of the organized basic behaviors are weighted and combined according to the policy learned through this higher-level Q-learning. In total, the approach uses three distinct Q-learning algorithms, operating on different levels, to learn complex behaviors. Experimental results show that the resulting complex behaviors, composed from individual basic behaviors by these algorithms, function more adaptively in a dynamic environment. [ABSTRACT FROM PUBLISHER]
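To illustrate the general idea of fusing basic behaviors through a higher-level Q-learning policy, the sketch below shows one possible arrangement. It is not the authors' implementation: the two basic behaviors, the coarse state discretization, the set of weight combinations, and all parameter values are placeholder assumptions chosen only to make the example self-contained and runnable.

```python
# Minimal sketch (assumed, not from the paper) of weighting the motor-level
# outputs of basic behaviors with an upper-level Q-learning policy.
import random

# Hypothetical basic behaviors: each maps a (distance, bearing) observation
# to a (left_wheel, right_wheel) motor command.
def avoid_obstacle(obs):
    dist, bearing = obs
    return (0.2, 0.8) if bearing < 0 else (0.8, 0.2)

def go_to_goal(obs):
    dist, bearing = obs
    return (0.6 + 0.2 * bearing, 0.6 - 0.2 * bearing)

BEHAVIORS = [avoid_obstacle, go_to_goal]
# Discrete weight combinations the upper-level policy can choose among.
ACTIONS = [(1.0, 0.0), (0.7, 0.3), (0.3, 0.7), (0.0, 1.0)]

Q = {}  # Q[(state, action_index)] -> estimated value
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

def discretize(obs):
    # Coarse grid over observations; stands in for the aggregated state space.
    dist, bearing = obs
    return (int(dist * 4), int(bearing * 2))

def select_action(state):
    # Epsilon-greedy choice of a weight combination.
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q.get((state, a), 0.0))

def fuse(obs, weights):
    # Weighted sum of the motor commands proposed by each basic behavior.
    left = sum(w * b(obs)[0] for w, b in zip(weights, BEHAVIORS))
    right = sum(w * b(obs)[1] for w, b in zip(weights, BEHAVIORS))
    return left, right

def q_update(state, action, reward, next_state):
    # Standard one-step Q-learning update for the fusion policy.
    best_next = max(Q.get((next_state, a), 0.0) for a in range(len(ACTIONS)))
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + ALPHA * (reward + GAMMA * best_next - old)

# Example step: pick fusion weights for an observation and compute motor output.
obs = (0.5, -0.3)
state = discretize(obs)
a = select_action(state)
motors = fuse(obs, ACTIONS[a])
```

In this arrangement the lower-level behaviors remain fixed while the upper-level learner only decides how strongly each one contributes to the final motor command, which mirrors the layered structure described in the abstract.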

Details

Language :
English
ISSN :
1083-4427
Volume :
42
Issue :
4
Database :
Academic Search Index
Journal :
IEEE Transactions on Systems, Man & Cybernetics: Part A
Publication Type :
Academic Journal
Accession number :
76747198
Full Text :
https://doi.org/10.1109/TSMCA.2012.2183349