
River runoff causal discovery with deep reinforcement learning.

Authors :
Ji, Junzhong
Wang, Ting
Liu, Jinduo
Wang, Muhua
Tang, Wei
Source :
Applied Intelligence; Feb 2024, Vol. 54 Issue 4, p3547-3565, 19p
Publication Year :
2024

Abstract

Causal discovery from river runoff data aids flood prevention and mitigation strategies and has drawn growing attention in climate and earth science. However, most climate causal discovery methods rely on conditional independence approaches, overlooking the non-stationary characteristics of river runoff data and therefore performing poorly. In this paper, we propose a river runoff causal discovery method based on deep reinforcement learning, called RCD-DRL, to effectively learn causal relationships from non-stationary river runoff time series. The proposed method adopts an actor-critic framework consisting of three main modules: an actor module, a critic module, and a reward module. Specifically, RCD-DRL first employs the actor module, built on an encoder-decoder architecture, to learn latent features from raw river runoff data, enabling the model to adapt quickly to non-stationary data distributions and to generate a causality matrix among the stations. Subsequently, a critic network with two fully connected layers estimates the value of the current encoded features. Finally, the reward module, based on the Bayesian information criterion (BIC), computes the reward for the currently generated causality matrix. Experimental results on both synthetic and real datasets demonstrate that the proposed method outperforms state-of-the-art methods. [ABSTRACT FROM AUTHOR]
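To make the three modules named in the abstract concrete, below is a minimal sketch of how an encoder-decoder actor, a two-layer critic, and a BIC-based reward could fit together. It assumes PyTorch and a linear-Gaussian BIC score; all class names, shapes, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only; not the RCD-DRL code from the paper.
import numpy as np
import torch
import torch.nn as nn

class Actor(nn.Module):
    # Encoder-decoder: encodes multi-station runoff series into latent features,
    # then decodes edge logits for a station-by-station causality matrix.
    def __init__(self, n_stations, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(input_size=n_stations, hidden_size=hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, n_stations * n_stations)

    def forward(self, x):                      # x: (batch, time, n_stations)
        _, h = self.encoder(x)                 # h: (1, batch, hidden)
        h = h.squeeze(0)                       # encoded features: (batch, hidden)
        logits = self.decoder(h)               # edge logits: (batch, n_stations**2)
        return logits, h

class Critic(nn.Module):
    # Two fully connected layers estimating the value of the encoded features.
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, h):
        return self.net(h).squeeze(-1)         # value estimate: (batch,)

def bic_reward(data, adj):
    # Reward = negative BIC of a linear regression of each station on its sampled
    # parents; higher reward favours causality matrices that fit well with few edges.
    n, d = data.shape
    bic = 0.0
    for j in range(d):
        parents = np.flatnonzero(adj[:, j])    # rows i with edge i -> j
        if parents.size:
            X = data[:, parents]
            beta, *_ = np.linalg.lstsq(X, data[:, j], rcond=None)
            resid = data[:, j] - X @ beta
        else:
            resid = data[:, j]
        rss = float(resid @ resid) + 1e-8
        bic += n * np.log(rss / n) + parents.size * np.log(n)
    return -bic
```

In a typical actor-critic loop, the edge logits would be sampled into a binary adjacency matrix (for example, per-edge Bernoulli draws), scored with bic_reward, and the advantage (reward minus the critic's value estimate) would drive a policy-gradient update of the actor while the critic is regressed onto the observed rewards.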

Details

Language :
English
ISSN :
0924-669X
Volume :
54
Issue :
4
Database :
Complementary Index
Journal :
Applied Intelligence
Publication Type :
Academic Journal
Accession number :
176405960
Full Text :
https://doi.org/10.1007/s10489-024-05348-7