Reinforcement Learning Based Efficiency Optimization Scheme for the DAB DC–DC Converter With Triple-Phase-Shift Modulation
- Authors
Weihao Hu, Jian Xiao, Qi Huang, Zhe Chen, Zhangyong Chen, Frede Blaabjerg, and Yuanhong Tang
- Subjects
Reinforcement learning (RL), Q-learning, maximum power principle, computer science, DAB dc–dc converter, inductor, power (physics), control and systems engineering, modulation, control theory, electrical and electronic engineering, power efficiency, optimization, phase modulation, electrical efficiency, voltage
- Abstract
Aiming to improve the power efficiency of the dual-active-bridge (DAB) dc–dc converter, an efficiency optimization scheme with triple-phase-shift (TPS) modulation based on reinforcement learning (RL) is proposed in this article. Specifically, the Q-learning algorithm, a typical RL algorithm, is applied to train an agent offline to obtain an optimized modulation strategy; the trained agent then provides real-time control decisions online for the DAB dc–dc converter according to the current operating conditions. The main objective is to obtain the phase-shift angles that maximize the power efficiency of the DAB dc–dc converter by reducing its power losses. Moreover, all possible operation modes of the TPS modulation are considered during the offline training process of the Q-learning algorithm, so the cumbersome selection of the optimal operation mode required by conventional schemes is circumvented. Owing to these merits, the proposed RL-based efficiency optimization scheme achieves excellent performance over the whole range of load conditions and voltage conversion ratios. Finally, a 1.2-kW prototype is built, and the simulation and experimental results demonstrate that the power efficiency is improved by the proposed RL-based optimization scheme.
- Published
- 2021
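The abstract's scheme (offline Q-learning over candidate TPS phase-shift angles, then greedy online selection) can be sketched as follows. Everything here is illustrative: the loss function is a toy surrogate rather than the paper's converter loss model, the angle grid and hyperparameters are invented, and the update is a single-state (bandit-style) simplification of the Q-learning rule.

```python
import random

# Candidate normalized phase-shift ratios; the grid resolution is an assumption.
ANGLES = [i / 10 for i in range(11)]


def power_loss(d1, d2, d3):
    # Hypothetical stand-in with a single minimum; NOT the paper's loss model.
    # The real reward would come from a DAB loss model or measurements.
    return (d1 - 0.3) ** 2 + (d2 - 0.5) ** 2 + (d3 - 0.6) ** 2


def train_agent(episodes=5000, alpha=0.5, epsilon=0.2, seed=0):
    """Offline training: learn a Q-value for each triple of phase-shift angles.

    With a single fixed operating point there is no next state, so the
    Q-learning update reduces to Q(a) <- Q(a) + alpha * (r - Q(a)).
    """
    rng = random.Random(seed)
    actions = [(a, b, c) for a in ANGLES for b in ANGLES for c in ANGLES]
    q = {a: 0.0 for a in actions}
    for _ in range(episodes):
        if rng.random() < epsilon:       # explore a random angle triple
            act = rng.choice(actions)
        else:                            # exploit the best triple found so far
            act = max(q, key=q.get)
        reward = -power_loss(*act)       # maximizing efficiency = minimizing loss
        q[act] += alpha * (reward - q[act])
    return max(q, key=q.get)             # greedy policy used online


best = train_agent()
print(best)
```

In the paper's full setting the agent would also condition on the operating environment (load and voltage conversion ratio), i.e. a Q-table indexed by state as well as action; the bandit form above only illustrates the offline-train / online-decide split.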