1. Automatic learning of cyclist’s compliance for speed advice at intersections - a reinforcement learning-based approach
- Author
- Andreas Hegyi, Serge P. Hoogendoorn, and Azita Dabiri
- Subjects
- Logistics & transportation, Computer science, Machine learning, Reinforcement learning, Artificial intelligence, Industrial engineering & automation
- Abstract
Although algorithms exist that give speed advice to cyclists approaching traffic lights with uncertain timing, they all need to know, and thus assume, the cyclist's response to the advice in order to optimize it. To relax this assumption, this paper proposes an algorithm that combines reinforcement learning and planning to learn the cyclist's reaction to the advice and uses this information to plan the best next advice on the fly. Rather than the single search procedure that is conventional in existing architectures, two sample-based search procedures are used in the algorithm. This makes it possible to obtain an accurate local approximation of the action-value function despite the short computation time available in each decision epoch. The algorithm is tested in a simulation case study, which confirms the impact of a proper initialisation of the action-value function as well as the importance of using two search procedures.
- Published
- 2019
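For a concrete picture of the model-learning-plus-planning loop sketched in the abstract, the toy Python example below is an illustrative sketch only, not the authors' algorithm: the paper maintains an action-value function and two sample-based search procedures, whereas this sketch only shows the general idea of learning the cyclist's compliance online and choosing the next advice by sample-based rollouts. Every quantity in it (candidate speeds, green window, noise levels, the single compliance-gain model) is an assumption made for illustration.

```python
import random

# Illustrative-only sketch, not the paper's algorithm: a minimal loop that
# (i) learns an estimate of the cyclist's compliance with speed advice from
# observations and (ii) picks the next advice by sample-based rollouts with
# the learned model. All constants below are assumed toy values.

ADVICE_SPEEDS = [3.0, 4.0, 5.0, 6.0]   # candidate advised speeds in m/s (assumed)
GREEN_WINDOW = (20.0, 30.0)            # assumed green interval of the signal, in seconds
STOP_LINE = 100.0                      # distance from start to the stop line, in metres
DT = 1.0                               # length of one decision epoch, in seconds


class ComplianceModel:
    """Running estimate of how strongly the cyclist moves toward the advised speed."""

    def __init__(self, alpha=0.1):
        self.gain = 0.5     # initial guess: cyclist closes half the gap to the advice
        self.alpha = alpha  # learning rate for the running estimate

    def update(self, advised, old_speed, new_speed):
        # Infer the compliance gain implied by the latest observation and smooth it.
        if abs(advised - old_speed) > 1e-6:
            implied = (new_speed - old_speed) / (advised - old_speed)
            self.gain += self.alpha * (implied - self.gain)

    def predict(self, advised, speed):
        # Predicted speed in the next epoch if this advice is given now.
        return speed + self.gain * (advised - speed)


def rollout_value(position, speed, time, n=30):
    """Sample-based evaluation: fraction of noisy rollouts that arrive during green."""
    hits = 0
    for _ in range(n):
        pos, v, t = position, speed, time
        while pos < STOP_LINE:
            v = max(0.5, v + random.gauss(0.0, 0.2))  # assumed speed fluctuation
            pos += v * DT
            t += DT
        if GREEN_WINDOW[0] <= t <= GREEN_WINDOW[1]:
            hits += 1
    return hits / n


def best_advice(model, position, speed, time):
    """Plan the next advice by simulating each candidate with the learned model."""
    scored = [(rollout_value(position, model.predict(a, speed), time), a)
              for a in ADVICE_SPEEDS]
    return max(scored)[1]


if __name__ == "__main__":
    random.seed(0)
    model = ComplianceModel()
    pos, speed, t = 0.0, 4.0, 0.0
    TRUE_GAIN = 0.8  # the simulated cyclist's actual (unknown) compliance
    while pos < STOP_LINE:
        advice = best_advice(model, pos, speed, t)
        new_speed = speed + TRUE_GAIN * (advice - speed) + random.gauss(0.0, 0.1)
        model.update(advice, speed, new_speed)  # learn from the observed reaction
        speed = max(0.5, new_speed)
        pos += speed * DT
        t += DT
        print(f"t={t:4.0f}s  pos={pos:5.1f}m  advice={advice:.1f}  "
              f"speed={speed:.2f}  estimated_gain={model.gain:.2f}")
```

The separation between the learned compliance model and the rollout-based planner mirrors, very loosely, the abstract's point that the cyclist's response need not be assumed in advance: the response estimate is updated from observations at every decision epoch and immediately reused when evaluating the next candidate advice.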