A Generic Markov Decision Process Model and Reinforcement Learning Method for Scheduling Agile Earth Observation Satellites
- Authors
- He, Yongming; Xing, Lining; Chen, Yingwu; Pedrycz, Witold; Wang, Ling; Wu, Guohua
- Subjects
- REINFORCEMENT learning; ARTIFICIAL satellites; MARKOV processes; TELECOMMUNICATION satellites; REWARD (Psychology); PRODUCTION scheduling
- Abstract
We investigate a general reinforcement learning solution for the agile satellite scheduling problem. The core idea is to learn, from training experience, a value function that evaluates the long-term benefit of a given state, and then to use this value function to guide decisions in unseen situations. First, the agile satellite scheduling process is modeled as a finite Markov decision process with a continuous state space and a discrete action space. The two subproblems of agile Earth observation satellite scheduling, the sequencing problem and the timing problem, are handled by the agent and the environment in this model, respectively; a satisfactory solution to the timing problem is produced quickly by a constructive heuristic. The objective is to maximize the total reward over the entire scheduling process. Based on this design, we show that a Q-network is well suited to fitting the long-term benefit in such problems, and we train it with Q-learning. Experimental results show that the trained Q-network copes efficiently with unseen data and generates high total profit in a short time. The method scales well and can be applied to different types of satellite scheduling problems by customizing only the constraint-checking process and the reward signals.
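The abstract describes the method only at a high level, so below is a minimal, illustrative sketch (in Python, not from the paper) of the general technique it names: an epsilon-greedy agent choosing a discrete action (which task to schedule next) from a continuous state-feature vector, with a small Q-network updated by Q-learning toward r + gamma * max Q(s', a'). All dimensions, hyperparameters, and the toy environment are assumptions for illustration; the paper's actual state features, constraint checking, and timing heuristic are not reproduced here.

```python
# Minimal sketch (NOT the authors' implementation) of a Q-network trained by
# Q-learning over a continuous state space and a discrete action space, as the
# abstract describes. All dimensions, hyperparameters, and the toy environment
# are hypothetical placeholders.
import numpy as np

rng = np.random.default_rng(0)

STATE_DIM = 8     # hypothetical number of continuous state features
N_ACTIONS = 5     # hypothetical number of candidate tasks per decision step
GAMMA = 0.95      # discount factor
LR = 1e-3         # learning rate
EPSILON = 0.1     # exploration rate for epsilon-greedy selection

# One-hidden-layer Q-network: maps a state vector to one Q-value per action.
W1 = rng.normal(scale=0.1, size=(STATE_DIM, 32)); b1 = np.zeros(32)
W2 = rng.normal(scale=0.1, size=(32, N_ACTIONS)); b2 = np.zeros(N_ACTIONS)

def q_values(state):
    """Forward pass: returns Q(s, .) for all actions and the hidden activations."""
    h = np.maximum(0.0, state @ W1 + b1)          # ReLU hidden layer
    return h @ W2 + b2, h

def select_action(state):
    """Epsilon-greedy choice over the discrete set of candidate tasks."""
    if rng.random() < EPSILON:
        return int(rng.integers(N_ACTIONS))
    q, _ = q_values(state)
    return int(np.argmax(q))

def q_update(state, action, reward, next_state, done):
    """One Q-learning step: move Q(s, a) toward r + gamma * max_a' Q(s', a')."""
    global W1, b1, W2, b2
    q, h = q_values(state)
    target = reward if done else reward + GAMMA * np.max(q_values(next_state)[0])
    td_error = target - q[action]
    # Manual backprop of the squared TD error through the tiny network.
    grad_out = np.zeros(N_ACTIONS)
    grad_out[action] = -td_error
    grad_h = (W2 @ grad_out) * (h > 0)            # backprop before updating W2
    W2 -= LR * np.outer(h, grad_out); b2 -= LR * grad_out
    W1 -= LR * np.outer(state, grad_h); b1 -= LR * grad_h

# Toy training loop with a random stand-in for the scheduling environment, which
# in the paper checks constraints, resolves the timing subproblem with a
# constructive heuristic, and returns the reward signal.
state = rng.normal(size=STATE_DIM)
for step in range(1000):
    action = select_action(state)
    reward = rng.random()                         # placeholder observation profit
    next_state = rng.normal(size=STATE_DIM)       # placeholder next state
    q_update(state, action, reward, next_state, done=False)
    state = next_state
```

In this sketch the "environment" is random noise; in the paper's formulation it would check scheduling constraints, fix the observation start time for the selected task, and emit the corresponding reward, so only that component and the reward definition would need to change for a different satellite scheduling variant.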
- Published
- 2022