Combining Correlation-Based and Reward-Based Learning in Neural Control for Policy Improvement
- Author
- Manoonpong, Poramate; Kolodziejski, Christoph; Wörgötter, Florentin; Morimoto, Jun
- Subjects
- Artificial neural networks; Reinforcement learning; Artificial intelligence; Mathematical models; Performance evaluation; Computer simulation; Motion control devices
- Abstract
Classical conditioning (conventionally modeled as correlation-based learning) and operant conditioning (conventionally modeled as reinforcement learning or reward-based learning) have both been found in biological systems. Evidence shows that these two mechanisms strongly involve learning about associations. Based on these biological findings, we propose a new learning model to achieve successful control policies for artificial systems. This model combines correlation-based learning, using input correlation learning (ICO learning), and reward-based learning, using continuous actor-critic reinforcement learning (RL), thereby working as a dual-learner system. The model's performance is evaluated in simulations of a cart-pole system, as a dynamic motion control problem, and a mobile robot system, as a goal-directed behavior control problem. Results show that the model strongly improves the pole-balancing control policy; i.e., it allows the controller to learn to stabilize the pole over a larger domain of initial conditions than is achieved with a single learning mechanism. The model also finds a successful control policy for goal-directed behavior; i.e., the robot learns to approach a given goal more effectively than with either of the model's individual components. Thus, the study pursued here sharpens our understanding of how two different learning mechanisms can be combined and complement each other for solving complex tasks. [ABSTRACT FROM AUTHOR]
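To make the dual-learner idea concrete, the sketch below pairs a simple ICO learning rule (predictive weights updated by the correlation between each predictive input and the derivative of a reflex signal) with a minimal linear continuous actor-critic whose TD error drives both the critic and an exploration-based actor update, and sums the two motor commands. This is only an illustrative sketch: the class names, learning rates, linear function approximation, and the additive combination of the two pathways are assumptions, not the paper's exact formulation.

    import numpy as np

    class ICOLearner:
        """Correlation-based (ICO) learning: each predictive input's weight
        grows with the correlation between that input and the derivative of
        the reflex signal; the reflex pathway keeps a fixed weight.
        (Sketch only; learning rate and signal shapes are assumptions.)"""

        def __init__(self, n_predictive, lr=0.01):
            self.w = np.zeros(n_predictive)  # learned predictive weights
            self.w_reflex = 1.0              # fixed reflex weight
            self.prev_reflex = 0.0
            self.lr = lr

        def step(self, x_pred, x_reflex):
            # Motor command: fixed reflex pathway plus learned predictive pathway.
            u = self.w_reflex * x_reflex + self.w @ x_pred
            # ICO update: dw_i/dt = lr * x_i * d(x_reflex)/dt, discretized.
            self.w += self.lr * x_pred * (x_reflex - self.prev_reflex)
            self.prev_reflex = x_reflex
            return u

    class ActorCritic:
        """Reward-based learning: a minimal linear actor-critic with TD(0)
        and Gaussian exploration for a continuous action. (Sketch only; the
        paper's continuous actor-critic RL is more elaborate.)"""

        def __init__(self, n_features, lr_actor=0.01, lr_critic=0.1,
                     gamma=0.95, sigma=0.1):
            self.theta = np.zeros(n_features)  # actor (policy) weights
            self.v = np.zeros(n_features)      # critic (value) weights
            self.lr_a, self.lr_c = lr_actor, lr_critic
            self.gamma, self.sigma = gamma, sigma
            self.noise = 0.0

        def act(self, phi):
            # Continuous action: linear policy plus exploration noise.
            self.noise = np.random.randn() * self.sigma
            return float(self.theta @ phi) + self.noise

        def update(self, phi, reward, phi_next, done):
            # TD(0) error drives both the critic and the actor update.
            v_next = 0.0 if done else float(self.v @ phi_next)
            td = reward + self.gamma * v_next - float(self.v @ phi)
            self.v += self.lr_c * td * phi
            # Reinforce the explored action direction when td > 0.
            self.theta += self.lr_a * td * self.noise * phi
            return td

    # Dual learner (combination assumed additive here): the final motor
    # command sums the correlation-based and reward-based pathways.
    if __name__ == "__main__":
        ico = ICOLearner(n_predictive=2)
        ac = ActorCritic(n_features=4)
        x_pred, x_reflex = np.array([0.3, 0.1]), 0.05   # toy sensor signals
        phi = np.array([0.2, -0.1, 0.4, 0.0])           # toy state features
        u_total = ico.step(x_pred, x_reflex) + ac.act(phi)
        print("combined motor command:", u_total)

Under these assumptions, the two pathways learn from complementary error signals: the ICO learner from the reflex signal's temporal derivative, available immediately, and the actor-critic from delayed reward, which is one plausible reading of why the abstract reports that the combination outperforms either mechanism alone.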
- Published
- 2013