
Cloud Computing-based Parallel Deep Reinforcement Learning Energy Management Strategy for Connected PHEVs.

Authors :
Tong Sun
Chao Ma
Zechun Li
Kun Yang
Source :
Engineering Letters. Jun2024, Vol. 32 Issue 6, p1210-1220. 11p.
Publication Year :
2024

Abstract

This paper proposes a novel cloud computing-based parallel deep reinforcement learning (DRL) energy management strategy (EMS) for connected plug-in hybrid vehicles. First, a proximal policy optimization (PPO) algorithm is developed. Since cloud computing can reduce the computational burden of connected vehicles, the PPO algorithm is deployed in the proposed cloud computing-based EMS. To improve strategy adaptation, a parallel mechanism is proposed to achieve information interaction with multiple vehicles. Considering the real-time control requirements, a thread pool is proposed and applied in the cloud computing-based parallel EMS. The thread pool-based strategy provides efficient real-time control and a strategy-improvement solution. To verify the PPO-based EMS, dynamic programming (DP), deep Q-network and double deep Q-network strategies are developed for comparison. Among the three DRL algorithms, PPO achieves a fuel efficiency improvement similar to that of the DP strategy. For parallel training of multiple connected vehicles, the cloud computing-based parallel EMS improves fuel economy by approximately 7.7%. The thread pool-based parallel real-time EMS reduces the average time for computational interactions by 20% and further improves fuel efficiency. The proposed strategy has the advantages of real-time control, adaptability and continuous learning for improved fuel efficiency. [ABSTRACT FROM AUTHOR]
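To make the thread pool idea concrete: the abstract describes a cloud service that interacts with many connected vehicles concurrently, using a thread pool to keep per-vehicle response times low. The sketch below is purely illustrative and is not the authors' implementation; the function names, the toy state representation (state-of-charge and power demand), and the placeholder power-split rule are all assumptions. Only the concurrency pattern (one thread-pool worker per vehicle request) reflects the mechanism the abstract names.

```python
from concurrent.futures import ThreadPoolExecutor

def policy_action(vehicle_state):
    """Stand-in for a cloud-side policy query (hypothetical):
    map a vehicle's (SOC, normalized power demand) to an
    engine power-split ratio in [0, 1]."""
    soc, power_demand = vehicle_state
    # Placeholder rule, not PPO: lean on the battery when SOC is high.
    return max(0.0, min(1.0, power_demand * (1.0 - soc)))

def serve_fleet(fleet_states, max_workers=4):
    """Serve all vehicles' requests concurrently via a thread pool,
    so one slow request does not block the rest of the fleet."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(policy_action, fleet_states))

# Three hypothetical connected vehicles reporting (SOC, power demand).
fleet = [(0.8, 0.5), (0.3, 0.9), (0.5, 0.4)]
actions = serve_fleet(fleet)
```

In a real deployment the worker would evaluate the trained PPO policy network and the results would also feed the parallel training loop; here `policy_action` is just a cheap stand-in so the dispatch pattern itself is runnable.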

Details

Language :
English
ISSN :
1816-093X
Volume :
32
Issue :
6
Database :
Academic Search Index
Journal :
Engineering Letters
Publication Type :
Academic Journal
Accession number :
177619630