
CACV-tree

Authors:
Jingwei Wang
Wenxin Hu
Wen Wu
Source:
Proceedings of the 2019 International Conference on Big Data Engineering.
Publication Year:
2019
Publisher:
ACM, 2019.

Abstract

Sentence similarity modeling plays an important role in Natural Language Processing (NLP) tasks and has therefore received much attention. In recent years, building on the success of word embedding, neural network methods have produced sentence embeddings with attractive performance. Nevertheless, most of these methods focus on learning semantic information and modeling it as a continuous vector, while the syntactic information of sentences has not been fully exploited. On the other hand, prior work has shown the benefits of structured trees that encode syntactic information, yet few methods in this branch exploit the advantages of sentence compression. This paper makes the first attempt to absorb the advantages of both by merging these techniques into a unified structure, dubbed the CACV-tree (Compression Attention Constituency Vector-tree). Experimental results on 14 widely used datasets demonstrate that our model is effective and competitive against state-of-the-art models.
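As context for the task the abstract describes, a minimal baseline for sentence similarity (not the paper's CACV-tree method, which additionally uses constituency trees, attention, and sentence compression) scores two sentences by the cosine similarity of their averaged word vectors. The sketch below uses toy three-dimensional embeddings that are illustrative assumptions, not trained vectors:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def sentence_vector(tokens, embeddings):
    # Average the word vectors of the tokens that have an embedding
    # (a simple continuous-vector sentence representation).
    vecs = [embeddings[t] for t in tokens if t in embeddings]
    if not vecs:
        return [0.0]
    dim = len(vecs[0])
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(dim)]

# Toy embeddings, purely illustrative.
emb = {
    "a":   [0.1, 0.0, 0.2],
    "cat": [0.9, 0.1, 0.3],
    "sat": [0.2, 0.8, 0.1],
    "dog": [0.8, 0.2, 0.4],
    "ran": [0.1, 0.9, 0.2],
}

s1 = sentence_vector("a cat sat".split(), emb)
s2 = sentence_vector("a dog ran".split(), emb)
print(round(cosine(s1, s2), 3))
```

Averaging discards word order and syntax entirely, which is exactly the limitation the abstract points at: tree-structured models keep syntactic information that this baseline throws away.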

Details

Database:
OpenAIRE
Journal:
Proceedings of the 2019 International Conference on Big Data Engineering
Accession number:
edsair.doi...........b9888c89cdd95129ddac88d237fad5d4
Full Text:
https://doi.org/10.1145/3341620.3341627