
An ensemble method for inverse reinforcement learning.

Authors :
Lin, Jin-Ling
Hwang, Kao-Shing
Shi, Haobin
Pan, Wei
Source :
Information Sciences. Feb 2020, Vol. 512, p518-532. 15p.
Publication Year :
2020

Abstract

In inverse reinforcement learning (IRL), a reward function is learned to generalize experts' behavior. This paper proposes a model-free IRL algorithm based on an ensemble method, in which the reward function is regarded as a parametric function of expected features and its parameters are updated by a weak classification method. IRL is formulated as a boosting classification problem, akin to the well-known AdaBoost algorithm, that discriminates between the feature expectations of the experts' demonstrations and those of the trajectory induced by the agent's current policy. The proposed approach treats each individual feature expectation as an attractor or an expeller, depending on the sign of the residual between the state trajectory of the expert's demonstration and the one induced by RL under the currently approximated reward function, so as to tackle the central challenges of IRL: accurate inference, generalizability, and correctness of prior knowledge. The method is then extended to approximate an abstract reward function from observations of more complex behavior composed of several basic actions. Simulation results in a labyrinth validate the proposed algorithm, and behaviors composed of a set of primitive actions on a robot soccer field are examined to demonstrate the applicability of the method. [ABSTRACT FROM AUTHOR]
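The abstract does not give the exact update rule, but the idea of a boosting-style weight update over feature expectations can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function names, the discounting scheme, and the specific AdaBoost-like per-feature weighting are all assumptions made for illustration.

```python
import numpy as np

def feature_expectations(trajectories, feature_fn, n_features, gamma=0.95):
    """Discounted feature expectations averaged over a set of state trajectories."""
    mu = np.zeros(n_features)
    for traj in trajectories:
        for t, state in enumerate(traj):
            mu += (gamma ** t) * feature_fn(state)
    return mu / max(len(trajectories), 1)

def boosted_irl_update(w, mu_expert, mu_agent, learning_rate=0.5):
    """One boosting-style update of the reward weights w (illustrative).

    Each feature acts as an 'attractor' (weight pushed up) or an 'expeller'
    (weight pushed down) depending on the sign of the residual between the
    expert's and the agent's feature expectations.
    """
    residual = mu_expert - mu_agent                       # per-feature residual
    # Normalized per-feature "error", kept away from 0 and 1 for stability.
    eps = np.clip(np.abs(residual) / (np.abs(residual).sum() + 1e-12),
                  1e-6, 1 - 1e-6)
    alpha = 0.5 * np.log((1.0 - eps) / eps)               # AdaBoost-like confidence
    w = w + learning_rate * alpha * np.sign(residual)
    return w / (np.linalg.norm(w) + 1e-12)                # keep weights bounded

def reward(w, feature_fn, state):
    """Reward as a linear (parametric) function of the state features."""
    return float(w @ feature_fn(state))
```

In an outer loop, one would re-optimize the agent's policy with RL under the updated reward, collect new trajectories, and repeat until the agent's feature expectations approach the expert's.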

Details

Language :
English
ISSN :
00200255
Volume :
512
Database :
Academic Search Index
Journal :
Information Sciences
Publication Type :
Periodical
Accession number :
140092407
Full Text :
https://doi.org/10.1016/j.ins.2019.09.066