1. A novel feature attention mechanism for improving the accuracy and robustness of runoff forecasting.
- Author
- Wang, Hao; Qin, Hui; Liu, Guanjun; Liu, Shuai; Qu, Yuhua; Wang, Kang; Zhou, Jianzhong
- Subjects
- *RUNOFF; *DEEP learning; *MACHINE learning; *WATERSHEDS; *LEARNING ability
- Abstract
• A novel feature processing paradigm and a model based on it are proposed.
• The rationality of the method is corroborated in hydrological feature discovery and time-series dependence extraction.
• The proposed method shows excellent performance in terms of runoff prediction.
• The performance of all models is validated on two constructed noisy datasets.

The interactions between hydrological factors are complex, and their correlation effects cannot be quantitatively explained from a mechanistic perspective. Extracting effective features from hydrological and meteorological data and quantifying their correlation effects, so that runoff prediction becomes more accurate and stable, is an urgent problem. In this study, we introduce a structural paradigm for hydrological time series, the feature attention mechanism, which uses deep learning methods to handle an entire feature space composed of multiple feature sequences. This method transforms the problem of feature-target association into multiple parallel binary classification problems and assigns an attention unit to each specific feature in the network. The attention distribution is adjusted by updating the neural network parameters in a supervised manner to generate feature weightings. In addition, two extensions of the initial network structure were proposed. The three methods were applied in a practical study of the upper Yangtze River Basin and compared with five benchmark methods on three evaluation metrics; the mean metric values improved by up to 9.43%, 10.05%, and 2.65%, respectively. Meanwhile, given the characteristics of hydrological data, the rationality of the method is corroborated in hydrological feature discovery and time-series dependence extraction. Moreover, we tested the model on two constructed noise datasets to demonstrate its robustness. The results of all the above experiments show that, compared with traditional methods, the attention module significantly improves learning and generalization ability, enhances noise resistance, and strengthens the robustness of the model. [ABSTRACT FROM AUTHOR]
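The per-feature gating idea described in the abstract — an independent attention unit per feature, acting like a parallel binary relevant/irrelevant decision whose weight rescales that feature — can be sketched as follows. This is a minimal illustration in NumPy, not the authors' network: the parameter values, shapes, and function names are hypothetical, and in the actual model the gate parameters would be learned jointly with the predictor in a supervised manner.

```python
import numpy as np

def feature_attention(features, att_params):
    """Per-feature attention sketch (hypothetical, not the paper's exact model).

    Each feature gets its own sigmoid gate, i.e. an independent binary
    'relevant vs. irrelevant' decision, and the feature matrix is rescaled
    by the resulting weights.
    """
    weights = 1.0 / (1.0 + np.exp(-att_params))  # sigmoid, one weight per feature
    return features * weights, weights           # broadcasts over time steps

# Toy input: 4 time steps x 3 hydrological feature sequences.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))

# In training these parameters would be updated by backpropagation;
# here they are fixed so the effect of the gates is visible.
theta = np.array([2.0, 0.0, -2.0])
X_att, w = feature_attention(X, theta)
```

With these illustrative parameters, the first feature is passed through almost unchanged (weight near 0.88), the second is halved, and the third is strongly suppressed (weight near 0.12) — mimicking how a trained attention distribution would emphasize informative hydrological features and damp noisy ones.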
- Published
- 2023