1. Driving in Real Life with Inverse Reinforcement Learning
- Authors
Tung Phan-Minh, Forbes Howington, Ting-Sheng Chu, Sang Uk Lee, Momchil S. Tomov, Nanxiang Li, Caglayan Dicle, Samuel Findler, Francisco Suarez-Ruiz, Robert Beaudoin, Bo Yang, Sammy Omari, and Eric M. Wolff
- Subjects
Computer Science - Robotics, Computer Science - Artificial Intelligence, Computer Science - Machine Learning, I.2.6, I.2.9
- Abstract
In this paper, we introduce the first learning-based planner to drive a car in dense, urban traffic using Inverse Reinforcement Learning (IRL). Our planner, DriveIRL, generates a diverse set of trajectory proposals, filters these trajectories with a lightweight and interpretable safety filter, and then uses a learned model to score each remaining trajectory. The best trajectory is then tracked by the low-level controller of our self-driving vehicle. We train our trajectory scoring model on a 500+ hour real-world dataset of expert driving demonstrations in Las Vegas within the maximum entropy IRL framework. DriveIRL's benefits include: a simple design due to only learning the trajectory scoring function, relatively interpretable features, and strong real-world performance. We validated DriveIRL on the Las Vegas Strip and demonstrated fully autonomous driving in heavy traffic, including scenarios involving cut-ins, abrupt braking by the lead vehicle, and hotel pickup/dropoff zones. Our dataset will be made public to help further research in this area.
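The propose/filter/score pipeline described in the abstract, together with the maximum-entropy IRL scoring it relies on, can be sketched roughly as follows. This is a minimal illustration under assumptions, not the authors' implementation: the linear feature score, the `is_safe` check, and all function names are hypothetical.

```python
import math

def maxent_probs(feature_vectors, theta):
    """Maximum-entropy IRL assigns a trajectory with features f a probability
    proportional to exp(theta . f); this computes that softmax stably."""
    scores = [sum(w * f for w, f in zip(theta, fv)) for fv in feature_vectors]
    m = max(scores)  # subtract the max before exponentiating for stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def select_trajectory(proposals, features, is_safe, theta):
    """Propose/filter/score sketch: drop unsafe candidates, then return the
    survivor with the highest learned (here: linear) score."""
    safe = [t for t in proposals if is_safe(t)]
    if not safe:
        return None
    probs = maxent_probs([features(t) for t in safe], theta)
    best_idx = max(range(len(safe)), key=lambda i: probs[i])
    return safe[best_idx]

# Toy usage: each trajectory is summarized as (progress, lateral_offset);
# the safety filter rejects large lateral offsets, and theta rewards progress.
proposals = [(2.0, 0.1), (1.0, 0.0), (3.0, 2.5)]
features = lambda t: [t[0], -abs(t[1])]
is_safe = lambda t: abs(t[1]) <= 0.5
theta = [1.0, 0.5]
best = select_trajectory(proposals, features, is_safe, theta)
print(best)  # (2.0, 0.1): highest-scoring trajectory that passes the filter
```

In the real system the learned scoring model replaces the linear score, and the safety filter encodes interpretable rules rather than a single lateral-offset threshold, but the control flow (generate, filter, score, pick the best) is the same.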
- Published
2022