
Bio-Inspired Collision Avoidance in Swarm Systems via Deep Reinforcement Learning.

Authors :
Na, Seongin
Niu, Hanlin
Lennox, Barry
Arvin, Farshad
Source :
IEEE Transactions on Vehicular Technology. Mar 2022, Vol. 71, Issue 3, p2511-2526. 16p.
Publication Year :
2022

Abstract

Autonomous vehicles have been highlighted as a major growth area for future transportation systems, and the deployment of large numbers of these vehicles is expected once safety and legal challenges are overcome. To meet the necessary safety standards, effective collision avoidance technologies are required to ensure that the number of accidents is kept to a minimum. As large numbers of autonomous vehicles operating together on roads can be regarded as a swarm system, we propose a bio-inspired collision avoidance strategy using virtual pheromones, an approach that has evolved effectively in nature over many millions of years. Previous research using virtual pheromones showed the potential of pheromone-based systems to maneuver a swarm of robots. However, designing an individual controller to maximise the performance of the entire swarm is a major challenge. In this paper, we propose a novel deep reinforcement learning (DRL)-based approach that is able to train a controller that introduces collision avoidance behaviour. To accelerate training, we propose a novel sampling strategy called Highlight Experience Replay and integrate it with a Deep Deterministic Policy Gradient algorithm with noise added to the weights and biases of the artificial neural network to improve exploration. To evaluate the performance of the proposed DRL-based controller, we applied it to navigation and collision avoidance tasks in three different traffic scenarios. The experimental results showed that the proposed DRL-based controller outperformed the manually-tuned controller in terms of stability, effectiveness, robustness and ease of tuning. Furthermore, the proposed Highlight Experience Replay method outperformed the popular Prioritized Experience Replay sampling strategy, taking only 27% of the training time on average over the three stages. [ABSTRACT FROM AUTHOR]
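As a rough illustration of the exploration mechanism the abstract mentions (noise added to the weights and biases of the policy network), the sketch below perturbs a copy of a small deterministic actor with Gaussian parameter noise. The network architecture, the noise scale `sigma`, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of parameter-space exploration noise for a DDPG-style actor.
# Assumed details: network shape, sigma value, and function names are illustrative.
import copy
import torch
import torch.nn as nn

class Actor(nn.Module):
    """Small deterministic policy network: state -> bounded action."""
    def __init__(self, state_dim: int, action_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, action_dim), nn.Tanh(),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)

def perturbed_copy(actor: Actor, sigma: float = 0.05) -> Actor:
    """Return a copy of the actor with Gaussian noise added to every
    weight and bias; the noisy copy generates exploratory actions."""
    noisy = copy.deepcopy(actor)
    with torch.no_grad():
        for p in noisy.parameters():
            p.add_(torch.randn_like(p) * sigma)
    return noisy

# Usage: act in the environment with the noisy copy while the clean actor
# is updated from the replay buffer.
actor = Actor(state_dim=8, action_dim=2)
explorer = perturbed_copy(actor, sigma=0.05)
action = explorer(torch.zeros(1, 8))
```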

Details

Language :
English
ISSN :
0018-9545
Volume :
71
Issue :
3
Database :
Academic Search Index
Journal :
IEEE Transactions on Vehicular Technology
Publication Type :
Academic Journal
Accession number :
155866949
Full Text :
https://doi.org/10.1109/TVT.2022.3145346