
Interpretable policies for reinforcement learning by empirical fuzzy sets.

Authors :
Huang, Jianfeng
Angelov, Plamen P.
Yin, Chengliang
Source :
Engineering Applications of Artificial Intelligence, May 2020, Vol. 91.
Publication Year :
2020

Abstract

This paper proposes a method and an algorithm to implement interpretable fuzzy reinforcement learning (IFRL). It provides alternative solutions to common problems in RL, such as function approximation and continuous action spaces. The learning process resembles that of human beings: clustering the encountered states, developing experience for each typical case, and making decisions fuzzily. The learned policy can be expressed as human-intelligible IF-THEN rules, which facilitates further investigation and improvement. The method adopts the actor–critic architecture while differing from mainstream policy gradient methods. The value function is approximated through the fuzzy system AnYa. The state–action space is discretized into a static grid of nodes; each node is treated as one prototype and corresponds to one fuzzy rule, with the value of the node as the consequent. The consequent values are updated using the Sarsa(λ) algorithm. The probability distribution of optimal actions for different states is estimated through Empirical Data Analytics (EDA), Autonomous Learning Multi-Model Systems (ALMMo), and Empirical Fuzzy Sets (εFS). The fuzzy kernel of IFRL avoids the lack of interpretability of other methods based on neural networks. Simulation results on four problems, namely Mountain Car, Continuous Gridworld, Pendulum Position, and Tank Level Control, are presented as a proof of the proposed concept. [ABSTRACT FROM AUTHOR]
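The value-function scheme the abstract describes (a static grid of prototype nodes, one fuzzy rule per node, node values as consequents, updated with Sarsa(λ)) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the Cauchy-like membership kernel, the normalization, the grid resolution, and all parameter names (`gamma_k`, `alpha`, `lam`) are assumptions.

```python
import numpy as np

def make_grid(lows, highs, n_per_dim):
    """Static grid of prototype nodes over the state-action space."""
    axes = [np.linspace(l, h, n) for l, h, n in zip(lows, highs, n_per_dim)]
    mesh = np.meshgrid(*axes, indexing="ij")
    return np.stack([m.ravel() for m in mesh], axis=1)

def memberships(x, prototypes, gamma_k=10.0):
    """Firing strength of each rule from distance to its prototype,
    normalized to sum to one (Cauchy-like kernel is an assumption)."""
    d2 = np.sum((prototypes - x) ** 2, axis=1)
    lam = 1.0 / (1.0 + gamma_k * d2)
    return lam / lam.sum()

class FuzzySarsaLambda:
    """Q(s, a) = sum_i mu_i(s, a) * q_i, with consequents q_i
    updated by Sarsa(lambda) over eligibility traces."""
    def __init__(self, prototypes, alpha=0.1, gamma=0.99, lam=0.9):
        self.prototypes = prototypes
        self.q = np.zeros(len(prototypes))   # consequent of each rule
        self.e = np.zeros(len(prototypes))   # eligibility traces
        self.alpha, self.gamma, self.lam = alpha, gamma, lam

    def value(self, sa):
        return memberships(sa, self.prototypes) @ self.q

    def update(self, sa, r, sa_next, done):
        mu = memberships(sa, self.prototypes)
        target = r if done else r + self.gamma * self.value(sa_next)
        delta = target - mu @ self.q
        self.e = self.gamma * self.lam * self.e + mu   # accumulating traces
        self.q += self.alpha * delta * self.e
        if done:
            self.e[:] = 0.0
```

Because each prototype is a rule antecedent and each `q[i]` a consequent, the learned table reads off directly as IF-THEN rules ("IF (s, a) is close to prototype i THEN value is q_i"), which is the interpretability property the abstract emphasizes.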

Details

Language :
English
ISSN :
09521976
Volume :
91
Database :
Academic Search Index
Journal :
Engineering Applications of Artificial Intelligence
Publication Type :
Academic Journal
Accession number :
142769854
Full Text :
https://doi.org/10.1016/j.engappai.2020.103559