Learning Chinese Word Embeddings With Words and Subcharacter N-Grams

Authors :
Ruizhi Kang
Hongjun Zhang
Wenning Hao
Kai Cheng
Guanglu Zhang
Source :
IEEE Access, Vol 7, Pp 42987-42992 (2019)
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Co-occurrence information between words is the basis of training word embeddings. In addition, Chinese characters are composed of subcharacters, and words made up of the same characters or subcharacters usually have similar semantics, yet this internal substructure information is usually neglected by popular models. In this paper, we propose a novel method for learning Chinese word embeddings that makes full use of both external co-occurrence context information and internal substructure information. We represent each word as a bag of subcharacter n-grams, and our model learns vector representations for the word and its subcharacter n-grams. The final word embedding is the sum of these two kinds of vector representations, which allows the learned embeddings to capture both internal structure information and external co-occurrence information. Experiments show that our method outperforms state-of-the-art methods on benchmarks.
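
The composition step the abstract describes is in the fastText family: a word's embedding is the sum of its own vector and the vectors of its subcharacter n-grams. Below is a minimal sketch of that composition, assuming a hypothetical character-to-subcharacter decomposition table (SUBCHARS) and randomly initialized lookup tables; the paper derives its subcharacters from actual Chinese character structure and trains the vectors with a co-occurrence objective, neither of which is reproduced here.

```python
import numpy as np

DIM = 100
rng = np.random.default_rng(0)

# Hypothetical character -> subcharacter decomposition (illustrative only;
# the paper's subcharacter inventory is not specified in this abstract).
SUBCHARS = {
    "智": ["矢", "口", "日"],
    "慧": ["彗", "心"],
}

def subchar_ngrams(word, n_min=1, n_max=3):
    """Flatten a word into its subcharacter sequence and collect n-grams."""
    seq = [s for ch in word for s in SUBCHARS.get(ch, [ch])]
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(seq) - n + 1):
            grams.append("".join(seq[i:i + n]))
    return grams

# Embedding tables for words and n-grams. Randomly initialized here; in
# training they would be updated by a skip-gram-style co-occurrence loss.
word_vecs, gram_vecs = {}, {}

def lookup(table, key):
    if key not in table:
        table[key] = rng.normal(scale=0.1, size=DIM)
    return table[key]

def word_embedding(word):
    """Final embedding = word vector + sum of subcharacter n-gram vectors."""
    vec = lookup(word_vecs, word).copy()
    for g in subchar_ngrams(word):
        vec += lookup(gram_vecs, g)
    return vec

v = word_embedding("智慧")
print(v.shape)  # (100,)
```

Because the n-gram vectors are shared across words, words built from the same characters or subcharacters end up with correlated embeddings, which is the intuition the abstract appeals to.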

Details

Language :
English
ISSN :
2169-3536
Volume :
7
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.fcc86b0dd9504ac3b9dbb388954ac611
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2019.2908014