
Evaluating semi-cooperative Nash/Stackelberg Q-learning for traffic routes plan in a single intersection.

Authors :
Guo, Jian
Harmati, Istvan
Source :
Control Engineering Practice. Sep 2020, Vol. 102.
Publication Year :
2020

Abstract

As traffic congestion in urban transportation systems grows more severe and frequent, many models based on reinforcement learning (RL) have been proposed to address it. The traffic problem can be cast as a multi-agent reinforcement learning (MARL) system in which the incoming links (i.e., road sections) are the agents and their actions control the signal lights. This paper proposes a semi-cooperative Nash Q-learning approach built on single-agent Q-learning and Nash equilibrium: the agents select actions via Nash equilibrium, but pursue a common goal cooperatively when more than one Nash equilibrium exists. An extended version, semi-cooperative Stackelberg Q-learning, is then designed for comparison, in which the Nash equilibrium in the Q-learning process is replaced by a Stackelberg equilibrium: the agent with the longest queue is promoted to leader and the others act as followers reacting to the leader's decision. Instead of adjusting green-light timing plans, as published in other research, this paper focuses on finding the multi-route plan that passes the most vehicles through a single intersection, combining game theory and RL for decision-making in the multi-agent framework. The two multi-agent Q-learning methods are implemented and compared with a constant strategy (i.e., fixed, periodic green and red light intervals). The simulated results show that semi-cooperative Stackelberg Q-learning performs better; a toy sketch of its leader-follower selection rule follows the highlights below.

• The multi-agent control method optimizes traffic flow via route planning.
• Semi-cooperative Nash Q-learning is developed and examined.
• Semi-cooperative Stackelberg Q-learning extends it by replacing the Nash equilibrium with a Stackelberg equilibrium.
• Both proposed methods improve traffic flow compared with the constant strategy; semi-cooperative Stackelberg Q-learning performs better. [ABSTRACT FROM AUTHOR]
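The leader-follower action selection described in the abstract can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the two-link setting, the action encoding, the "vehicles passed" reward, the toy arrival dynamics, and the simplified greedy bootstrap target (the paper uses the game-theoretic equilibrium value in the update) are all assumptions made for illustration.

```python
import random
from collections import defaultdict

# Toy setting: two incoming links (agents) at one intersection.
N_AGENTS = 2
ACTIONS = (0, 1)                      # hypothetical route-plan choices per agent
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1     # learning rate, discount, exploration

# One Q-table per agent, keyed by (state, joint_action).
Q = [defaultdict(float) for _ in range(N_AGENTS)]

def _joint(leader, a_leader, a_follower):
    """Assemble the joint action tuple in fixed agent order (0, 1)."""
    joint = [None, None]
    joint[leader], joint[1 - leader] = a_leader, a_follower
    return tuple(joint)

def stackelberg_joint_action(state, queues):
    """Leader = link with the largest queue. The follower best-responds to
    each candidate leader action; the leader then commits to the action
    whose induced response maximises the leader's own Q-value."""
    leader = max(range(N_AGENTS), key=lambda i: queues[i])
    follower = 1 - leader
    best_joint, best_val = None, float("-inf")
    for a_l in ACTIONS:
        # Follower's best response to the leader's committed action a_l.
        a_f = max(ACTIONS,
                  key=lambda a: Q[follower][(state, _joint(leader, a_l, a))])
        joint = _joint(leader, a_l, a_f)
        if Q[leader][(state, joint)] > best_val:
            best_joint, best_val = joint, Q[leader][(state, joint)]
    return best_joint

def step(queues, joint_action):
    """Hypothetical environment: an agent passes up to 3 queued vehicles when
    its route plan is served (action 1); new vehicles then arrive at random."""
    rewards, new_queues = [], []
    for i, q in enumerate(queues):
        passed = min(q, 3) if joint_action[i] == 1 else 0
        rewards.append(passed)
        new_queues.append(q - passed + random.randint(0, 2))
    return tuple(new_queues), rewards

def train(episodes=200, horizon=50):
    all_joints = [(a, b) for a in ACTIONS for b in ACTIONS]
    for _ in range(episodes):
        queues = (random.randint(0, 5), random.randint(0, 5))
        for _ in range(horizon):
            state = queues
            if random.random() < EPS:                       # exploration
                joint = tuple(random.choice(ACTIONS) for _ in range(N_AGENTS))
            else:                                           # Stackelberg play
                joint = stackelberg_joint_action(state, queues)
            next_queues, rewards = step(queues, joint)
            for i in range(N_AGENTS):
                # Simplified bootstrap: greedy max over next joint actions.
                nxt = max(Q[i][(next_queues, ja)] for ja in all_joints)
                td = rewards[i] + GAMMA * nxt - Q[i][(state, joint)]
                Q[i][(state, joint)] += ALPHA * td
            queues = next_queues

if __name__ == "__main__":
    train()
    print("Joint action chosen for queues (4, 1):",
          stackelberg_joint_action((4, 1), (4, 1)))
```

In this sketch the leader role is reassigned at every step from the current queue lengths, mirroring the abstract's rule that the agent with the largest queue leads; the semi-cooperative Nash variant would instead compute a Nash equilibrium over the joint Q-values and break ties among multiple equilibria in favour of the common goal.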

Details

Language :
English
ISSN :
0967-0661
Volume :
102
Database :
Academic Search Index
Journal :
Control Engineering Practice
Publication Type :
Academic Journal
Accession number :
145055269
Full Text :
https://doi.org/10.1016/j.conengprac.2020.104525