1. Deep stochastic reinforcement learning-based energy management strategy for fuel cell hybrid electric vehicles.
- Author
-
Jouda, Basel; Al-Mahasneh, Ahmad Jobran; Mallouh, Mohammed Abu
- Subjects
- DEEP reinforcement learning; FUEL cell vehicles; HYBRID electric vehicles; REINFORCEMENT learning; FUEL cells; ENERGY management; EPISTEMIC uncertainty
- Abstract
• A deep stochastic reinforcement learning-based approach addresses epistemic uncertainty in a midsize fuel cell hybrid electric vehicle. • The performance of the proposed approach is benchmarked against a Double Deep Q-Network (DDQN), a Power Follower Controller (PFC), and a Fuzzy Logic Controller (FLC). • Using the New York City cycle as a validation drive cycle, the approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively. • Under another validation cycle, the Amman cycle, the deep REINFORCE approach improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively. • The proposed method requires 38% less training time than the DDQN approach. Fuel cell hybrid electric vehicles offer a promising solution for sustainable and environmentally friendly transportation, but they require efficient energy management strategies (EMSs) to optimize their fuel economy. However, designing an optimal learning-based EMS becomes challenging in the presence of limited training data. This paper presents a deep stochastic reinforcement learning-based approach to address this issue of epistemic uncertainty in a midsize fuel cell hybrid electric vehicle. The approach introduces a deep REINFORCE framework with a deep neural network baseline and entropy regularization to develop a stochastic policy for the EMS. The performance of the proposed approach is benchmarked against three EMSs: i) a state-of-the-art deep deterministic reinforcement learning technique called Double Deep Q-Network (DDQN), ii) a Power Follower Controller (PFC), and iii) a Fuzzy Logic Controller (FLC). Using the New York City cycle as a validation drive cycle, the deep REINFORCE approach improves fuel economy by 7.68%, 13.53%, and 10% compared to DDQN, PFC, and FLC, respectively. Under another validation cycle, the Amman cycle, it improves fuel economy by 5.31%, 9.78%, and 9.93% compared to DDQN, PFC, and FLC, respectively.
Moreover, the training results show that the proposed algorithm reduces training time by 38% compared to the DDQN approach. The proposed deep REINFORCE-based EMS shows superiority not only in fuel economy but also in handling epistemic uncertainty. [ABSTRACT FROM AUTHOR]
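The deep REINFORCE framework named in the abstract (a policy-gradient method with a learned state-value baseline and an entropy bonus that keeps the policy stochastic) can be sketched on a toy power-split problem. Everything below is illustrative, not from the paper: the two-state demand model, the tabular baseline standing in for the paper's deep neural network, the reward shape, and the learning rates are all assumptions made for a minimal, runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the EMS: state = power demand {low, high},
# action = fuel-cell setpoint {low, high}. (Hypothetical model.)
N_STATES, N_ACTIONS = 2, 2
theta = np.zeros((N_STATES, N_ACTIONS))  # policy logits
w = np.zeros(N_STATES)                   # state-value baseline (tabular here)
ALPHA, ALPHA_W, BETA = 0.05, 0.1, 0.01   # policy lr, baseline lr, entropy weight

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

def episode(T=8):
    """Roll out one episode; reward is +1 when the setpoint matches demand."""
    states, actions, rewards = [], [], []
    for _ in range(T):
        s = rng.integers(N_STATES)
        a = rng.choice(N_ACTIONS, p=softmax(theta[s]))
        r = 1.0 if a == s else -1.0
        states.append(s); actions.append(a); rewards.append(r)
    return states, actions, rewards

def train(n_episodes=500):
    for _ in range(n_episodes):
        states, actions, rewards = episode()
        G = np.cumsum(rewards[::-1])[::-1]       # returns-to-go (gamma = 1)
        for s, a, g in zip(states, actions, G):
            p = softmax(theta[s])
            delta = g - w[s]                     # baseline-subtracted advantage
            w[s] += ALPHA_W * delta              # update the value baseline
            grad_logpi = -p.copy()
            grad_logpi[a] += 1.0                 # grad of log softmax policy
            # Gradient of the policy entropy w.r.t. the logits:
            logp = np.log(p + 1e-8)
            entropy_grad = -p * (logp - (p * logp).sum())
            theta[s] += ALPHA * (delta * grad_logpi + BETA * entropy_grad)

train()
```

The entropy term discourages premature collapse to a deterministic policy, which is one way stochastic policy-gradient methods cope with the limited-data (epistemic) uncertainty the abstract highlights; after training, the greedy action in each state matches the demand.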
- Published
- 2024