Bellman residuals minimization using online support vector machines.
- Source :
- Applied Intelligence; Oct2017, Vol. 47 Issue 3, p670-704, 35p
- Publication Year :
- 2017
Abstract
- In this paper we present and theoretically study an Approximate Policy Iteration (API) method, called API-BRM, that uses a very effective implementation of incremental Support Vector Regression (SVR) to approximate the value function, enabling generalization in Reinforcement Learning (RL) problems with continuous (or large) state spaces. API-BRM is presented as a non-parametric regularization method, derived from Bellman Residual Minimization (BRM), that minimizes the variance of the problem. The proposed method is incremental and may be applied to the on-line agent-interaction framework of RL. Because it is based on SVR, and hence on convex optimization, it is able to find the global solution of the problem. API-BRM with SVR can be seen as a regularization problem using the ε-insensitive loss; compared to the standard squared loss also used in regularization, this naturally yields a sparse approximation function. We extensively analyze the statistical properties of API-BRM, deriving a bound that controls the performance loss of the algorithm under some assumptions on the kernel and assuming that the collected samples are non-i.i.d., following a β-mixing process. Experimental evidence and performance results for well-known RL benchmarks are also presented. [ABSTRACT FROM AUTHOR]
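- To make the idea concrete, below is a minimal, hypothetical sketch of the core mechanism the abstract describes: approximating a value function by regressing states onto bootstrapped Bellman targets with an ε-insensitive SVR, whose ε-tube keeps the fitted model sparse. This uses scikit-learn's batch SVR in a simple fitted-evaluation loop on toy data; it is not the paper's incremental SVR or its full BRM formulation, and all names, parameters, and data here are illustrative assumptions.

```python
# Illustrative sketch (NOT the authors' implementation): value-function
# approximation via epsilon-insensitive SVR on bootstrapped Bellman targets.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
gamma = 0.95  # discount factor (assumed value)

# Toy 1-D continuous-state process: s' = clip(s + noise), reward = -|s|.
S = rng.uniform(-1.0, 1.0, size=(500, 1))
S_next = np.clip(S + rng.normal(0.0, 0.1, size=S.shape), -1.0, 1.0)
R = -np.abs(S).ravel()

v = lambda x: np.zeros(len(x))  # initial value-function estimate V_0 = 0
for _ in range(20):  # fitted policy-evaluation sweeps
    targets = R + gamma * v(S_next)  # bootstrapped Bellman targets r + gamma*V(s')
    svr = SVR(kernel="rbf", epsilon=0.05, C=10.0)  # eps-tube -> sparse solution
    svr.fit(S, targets)
    v = svr.predict  # updated value estimate V_{k+1}

# Sparsity from the eps-insensitive loss: only a subset of samples become
# support vectors, unlike squared-loss regression which uses them all.
print("support vectors used:", len(svr.support_), "of", len(S))
```

- A squared-loss regressor fit to the same targets would assign nonzero weight to every sample; the ε-insensitive loss ignores residuals inside the tube, which is the sparsity property the abstract highlights.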
Details
- Language :
- English
- ISSN :
- 0924-669X
- Volume :
- 47
- Issue :
- 3
- Database :
- Complementary Index
- Journal :
- Applied Intelligence
- Publication Type :
- Academic Journal
- Accession number :
- 125026506
- Full Text :
- https://doi.org/10.1007/s10489-017-0910-7