Transformer-Based Neural Network for Answer Selection in Question Answering
- Authors
Taihua Shao, Yupu Guo, Honghui Chen, and Zepeng Hao
- Subjects
Answer selection, deep learning, question answering, Transformer, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
Answer selection is a crucial subtask in question answering (QA) systems. Conventional approaches to this task mainly concentrate on developing linguistic tools, which are limited in both performance and practicability. With the tremendous success of deep learning in natural language processing, answer selection approaches based on deep learning have been well investigated. However, the traditional neural networks employed in existing answer selection models, i.e., recurrent neural networks or convolutional neural networks, typically struggle to capture global text information due to their operating mechanisms. The recent Transformer neural network is considered good at extracting global information by relying solely on the self-attention mechanism. Thus, in this paper, we design a Transformer-based neural network for answer selection, where we deploy a bidirectional long short-term memory (BiLSTM) network behind the Transformer to acquire both global information and sequential features in the question or answer sentence. Different from the original Transformer, our Transformer-based network focuses on sentence embedding rather than the seq2seq task. In addition, rather than incorporating sequential features through positional encoding as the universal Transformer does, we employ a BiLSTM. Furthermore, we apply three aggregation strategies to generate sentence embeddings for the question and answer, i.e., weighted mean pooling, max pooling, and attentive pooling, leading to three corresponding Transformer-based models: QA-TFWP, QA-TFMP, and QA-TFAP, respectively. Finally, we evaluate our proposals on the popular QA dataset WikiQA. The experimental results demonstrate that our proposed Transformer-based answer selection models outperform several competitive baselines. Specifically, our best model outperforms the state-of-the-art baseline by up to 2.37%, 2.83%, and 3.79% in terms of MAP, MRR, and accuracy, respectively.
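The abstract outlines the architecture: a Transformer encoder (without positional encoding) followed by a BiLSTM, then a pooling step that yields a fixed-size sentence embedding for matching a question against candidate answers. Below is a minimal PyTorch sketch of the max-pooling variant (QA-TFMP); all module names, dimensions, and layer counts are illustrative assumptions, not values taken from the paper.

```python
# Sketch of the QA-TFMP idea from the abstract: Transformer encoder for
# global information, BiLSTM for sequential features, max pooling for a
# fixed-size sentence embedding. Hyperparameters are assumptions.
import torch
import torch.nn as nn

class TransformerBiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, d_model=256, nhead=4,
                 num_layers=2, lstm_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        # Self-attention captures global interactions; no positional
        # encoding is added, since the abstract replaces it with a BiLSTM.
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=nhead, batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.bilstm = nn.LSTM(d_model, lstm_hidden,
                              batch_first=True, bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) integer tensor
        x = self.embed(token_ids)        # (batch, seq, d_model)
        x = self.transformer(x)          # global information
        x, _ = self.bilstm(x)            # sequential features, (batch, seq, 2*hidden)
        # Max pooling over time -> sentence embedding (batch, 2*hidden)
        return x.max(dim=1).values

# Score a question/answer pair by cosine similarity of their embeddings.
enc = TransformerBiLSTMEncoder(vocab_size=10000)
q = torch.randint(0, 10000, (1, 12))    # toy question token ids
a = torch.randint(0, 10000, (1, 30))    # toy answer token ids
print(torch.cosine_similarity(enc(q), enc(a)).item())
```

The weighted mean pooling (QA-TFWP) and attentive pooling (QA-TFAP) variants described in the abstract would swap out only the final pooling step; the encoder stack stays the same.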
- Published
2019