Single Trajectory Learning: Exploration Versus Exploitation.
- Author
Fu, Qiming; Liu, Quan; Zhong, Shan; Luo, Heng; Wu, Hongjie; Chen, Jianping
- Subjects
REINFORCEMENT learning; PROBLEM solving; DISTRIBUTION (Probability theory); MACHINE learning; BAYESIAN analysis; ALGORITHMS
- Abstract
In reinforcement learning (RL), the exploration/exploitation (E/E) dilemma is a crucial issue: an agent must balance exploring the environment to find more profitable actions against exploiting the empirically best actions for the current state. We focus on the single-trajectory RL problem, where an agent interacts with a partially unknown MDP over a single trajectory, and address the E/E dilemma in this setting. Given the reward function, we seek a good E/E strategy for MDPs drawn from some MDP distribution. This is achieved by selecting, from a large set of candidate strategies, the strategy with the best mean performance over a potential MDP distribution, exploiting single trajectories drawn from many MDPs. In this paper, we make the following contributions: (1) We discuss the strategy-selector algorithm based on a formula set and polynomial functions. (2) We provide a theoretical and experimental regret analysis of the learned strategy under a given MDP distribution. (3) We compare these methods experimentally with a state-of-the-art Bayesian RL method. [ABSTRACT FROM AUTHOR]
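The selection principle described in the abstract can be illustrated with a minimal sketch: score each candidate E/E strategy by its mean single-trajectory return over tasks sampled from a distribution, then keep the best-scoring one. This is not the authors' strategy-selector algorithm (which uses a formula set and polynomial functions); it is a simplified illustration where the "MDPs" are toy one-state bandit tasks and the candidate strategies are epsilon-greedy policies with different epsilon values, all names and parameters here being hypothetical.

```python
import random

def sample_task(rng, n_arms=5):
    # Draw a "task" from the distribution: here, a toy one-state MDP
    # (a bandit) with random mean rewards, standing in for the paper's
    # MDP distribution.
    return [rng.random() for _ in range(n_arms)]

def run_single_trajectory(means, epsilon, rng, horizon=200):
    # One epsilon-greedy trajectory on one sampled task; returns the
    # cumulative reward collected along the way.
    estimates = [0.0] * len(means)
    counts = [0] * len(means)
    total = 0.0
    for _ in range(horizon):
        if rng.random() < epsilon:                       # explore
            a = rng.randrange(len(means))
        else:                                            # exploit
            a = max(range(len(means)), key=lambda i: estimates[i])
        r = means[a] + rng.gauss(0.0, 0.1)               # noisy reward
        counts[a] += 1
        estimates[a] += (r - estimates[a]) / counts[a]   # running mean
        total += r
    return total

def select_strategy(candidates, n_tasks=200, seed=0):
    # Score each candidate E/E strategy by its mean single-trajectory
    # return over sampled tasks, and return the best one.
    rng = random.Random(seed)
    tasks = [sample_task(rng) for _ in range(n_tasks)]
    scores = {
        eps: sum(run_single_trajectory(t, eps, rng) for t in tasks) / n_tasks
        for eps in candidates
    }
    return max(scores, key=scores.get), scores

best, scores = select_strategy([0.0, 0.05, 0.1, 0.3, 0.9])
print("selected epsilon:", best)
```

A moderate epsilon typically wins here: pure exploitation (epsilon = 0) can lock onto the first arm tried, while near-pure exploration (epsilon = 0.9) wastes most of the single trajectory on random actions.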
- Published
- 2018