1. From Pixels to Torques: Policy Learning with Deep Dynamical Models
- Author
- Wahlström, N., Schön, T. B., and Deisenroth, M. P.
- Subjects
- FOS: Computer and information sciences; Computer Science::Machine Learning; Computer Science - Learning; Computer Science - Robotics; Statistics - Machine Learning; Signal Processing; FOS: Electrical engineering, electronic engineering, information engineering; Computer Science - Systems and Control; Machine Learning (stat.ML); Systems and Control (eess.SY); Robotics (cs.RO); Machine Learning (cs.LG)
- Abstract
- Data-efficient learning in continuous state-action spaces using very high-dimensional observations remains a key challenge in developing fully autonomous systems. In this paper, we consider one instance of this challenge, the pixels to torques problem, where an agent must learn a closed-loop control policy from pixel information only. We introduce a data-efficient, model-based reinforcement learning algorithm that learns such a closed-loop policy directly from pixel information. The key ingredient is a deep dynamical model that uses deep auto-encoders to learn a low-dimensional embedding of images jointly with a predictive model in this low-dimensional feature space. Joint learning ensures that not only static but also dynamic properties of the data are accounted for. This is crucial for long-term predictions, which lie at the core of the adaptive model predictive control strategy that we use for closed-loop control. Compared to state-of-the-art reinforcement learning methods for continuous states and actions, our approach learns quickly, scales to high-dimensional state spaces, and is an important step toward fully autonomous learning from pixels to torques.
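The abstract's key ingredient is a deep auto-encoder trained jointly with a predictive model in the learned low-dimensional feature space. The following is a minimal, illustrative sketch of that idea, not the authors' implementation: the class name, layer sizes, loss weighting, and variable names are all assumptions introduced here for clarity.

```python
import torch
import torch.nn as nn

class DeepDynamicalModel(nn.Module):
    """Sketch of a joint auto-encoder + latent transition model (assumed setup)."""
    def __init__(self, image_dim=784, latent_dim=3, action_dim=1, hidden=256):
        super().__init__()
        # Encoder: high-dimensional image -> low-dimensional feature z_t.
        self.encoder = nn.Sequential(
            nn.Linear(image_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))
        # Decoder: feature z_t -> reconstructed image (static properties).
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, image_dim))
        # Transition model: (z_t, u_t) -> z_{t+1} (dynamic properties).
        self.transition = nn.Sequential(
            nn.Linear(latent_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, latent_dim))

    def forward(self, x_t, u_t, x_next):
        z_t = self.encoder(x_t)
        z_next = self.encoder(x_next)
        x_t_rec = self.decoder(z_t)
        z_next_pred = self.transition(torch.cat([z_t, u_t], dim=-1))
        # Joint objective: image reconstruction plus latent prediction error,
        # so the embedding is shaped by both static and dynamic structure.
        recon_loss = ((x_t_rec - x_t) ** 2).mean()
        pred_loss = ((z_next_pred - z_next) ** 2).mean()
        return recon_loss + pred_loss

# Usage on random stand-in data (replace with real image/torque sequences).
model = DeepDynamicalModel()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x_t, u_t, x_next = torch.rand(32, 784), torch.rand(32, 1), torch.rand(32, 784)
loss = model(x_t, u_t, x_next)
opt.zero_grad()
loss.backward()
opt.step()
```

In the approach described above, long-term predictions would be produced by rolling the transition model forward in the latent space, and a model predictive controller would choose the actions; that control loop is omitted from this sketch.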
- Published
- 2015