
Chinese Word Segmentation Based on Self‐Learning Model and Geological Knowledge for the Geoscience Domain.

Authors :
Li, Wenjia
Ma, Kai
Qiu, Qinjun
Wu, Liang
Xie, Zhong
Li, Sanfeng
Chen, Siqiong
Source :
Earth & Space Science. Jun 2021, Vol. 8, Issue 6, p1-15. 15p.
Publication Year :
2021

Abstract

Chinese word segmentation (CWS) is foundational to text mining of geological reports and strongly influences downstream tasks such as named entity recognition and relation extraction. In recent years, the accuracy of domain-general CWS models has been limited by the domain coverage and scale of their training corpora, particularly for Chinese geological texts. Training these models also requires large amounts of manually annotated data, which is time-consuming and labor-intensive, and applying existing models and methods directly to the geoscience domain causes segmentation accuracy and performance to drop dramatically. To address this problem, we pretrain Bidirectional Encoder Representations from Transformers (BERT), which can leverage unlabeled domain-specific knowledge, on unlabeled Chinese geological text and then feed its representations into a Bidirectional Long Short-Term Memory and Conditional Random Field (BiLSTM-CRF) model to extract text features; the predicted tags are then decoded by the CRF. Experimental results show that the F1 score of the proposed model reaches 96.2% on a constructed test set of geological texts. Experiments also show that the proposed model achieves performance comparable to other state-of-the-art models and that the proposed cyclic self-learning strategy can be extended to other domains.

Plain Language Summary: Supervised word segmentation models commonly lack specialized knowledge in their training data sets and adapt poorly to new domains. This study proposes a sequence annotation model for geoscience text that automatically constructs a domain training corpus and performs word segmentation while taking the long-distance dependencies of sentences into account. We hope that our approach will serve as an alternative method that deserves further study.

Key Points:
- BERT is used to capture the abundant word-level features, grammatical structure features, and semantic features in sentences.
- The self-learning strategy, assisted by domain knowledge, can automatically construct the domain training corpus without manual intervention.
- A set of experiments verifies the effectiveness of the proposed method on an available, manually constructed hybrid data set.

[ABSTRACT FROM AUTHOR]
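The abstract describes a pipeline in which BERT supplies contextual character representations, a BiLSTM refines them, and a CRF decodes the segmentation tags. Below is a minimal sketch of such a BERT + BiLSTM-CRF character tagger, assuming PyTorch together with the Hugging Face transformers and pytorch-crf packages; the pretrained model name (bert-base-chinese), the BMES tag set, and all hyperparameters are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a BERT + BiLSTM-CRF segmenter for character-level CWS.
# Assumptions: PyTorch, Hugging Face `transformers`, and the `pytorch-crf` package;
# the BMES tag set and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast
from torchcrf import CRF


class BertBiLstmCrfSegmenter(nn.Module):
    """Chinese word segmentation cast as BMES sequence labeling over characters."""

    def __init__(self, bert_name="bert-base-chinese", hidden=256, num_tags=4):
        super().__init__()
        self.bert = BertModel.from_pretrained(bert_name)   # contextual character features
        self.lstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                            batch_first=True, bidirectional=True)
        self.emit = nn.Linear(2 * hidden, num_tags)         # per-character emission scores
        self.crf = CRF(num_tags, batch_first=True)          # learns tag-transition constraints

    def forward(self, input_ids, attention_mask, tags=None):
        feats = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
        feats, _ = self.lstm(feats)
        emissions = self.emit(feats)
        mask = attention_mask.bool()
        if tags is not None:
            # Training: negative log-likelihood of the gold tag sequence under the CRF.
            return -self.crf(emissions, tags, mask=mask, reduction="mean")
        # Inference: Viterbi-decode the best tag path for each sentence.
        return self.crf.decode(emissions, mask=mask)


if __name__ == "__main__":
    tokenizer = BertTokenizerFast.from_pretrained("bert-base-chinese")
    model = BertBiLstmCrfSegmenter()
    batch = tokenizer(["地质报告文本分词"], return_tensors="pt")
    print(model(batch["input_ids"], batch["attention_mask"]))  # predicted tag ids
```

Decoded tag sequences would then be mapped back to word boundaries (e.g., B/M/E/S spans); the paper's cyclic self-learning strategy for building the domain training corpus is not reproduced here.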

Details

Language :
English
ISSN :
2333-5084
Volume :
8
Issue :
6
Database :
Academic Search Index
Journal :
Earth & Space Science
Publication Type :
Academic Journal
Accession number :
151135059
Full Text :
https://doi.org/10.1029/2021EA001673