
Analyzing Language Learned by an Active Question Answering Agent

Authors :
Buck, Christian
Bulian, Jannis
Ciaramita, Massimiliano
Gajewski, Wojciech
Gesmundo, Andrea
Houlsby, Neil
Wang, Wei
Publication Year :
2018

Abstract

We analyze the language learned by an agent trained with reinforcement learning as a component of the ActiveQA system [Buck et al., 2017]. In ActiveQA, question answering is framed as a reinforcement learning task in which an agent sits between the user and a black-box question-answering system. The agent learns to reformulate the user's questions to elicit the optimal answers. It probes the system with many versions of a question, generated via a sequence-to-sequence question reformulation model, then aggregates the returned evidence to find the best answer. This process is an instance of machine-machine communication. The question reformulation model must adapt its language to increase the quality of the answers returned, matching the language of the question answering system. We find that the agent does not learn transformations that align with semantic intuitions, but instead discovers through learning classical information retrieval techniques such as tf-idf re-weighting and stemming.

Comment: Emergent Communication Workshop, NIPS 2017
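To make the abstract's finding concrete, the sketch below illustrates the two classical IR techniques it names: stemming and idf-based term re-weighting. This is not the paper's model or code; the corpus, the crude suffix-stripping stemmer, and all function names are invented for illustration, standing in for the black-box QA system's retrieval side.

```python
import math

# Toy document collection, invented for this sketch.
docs = [
    "the cat sat on the mat",
    "dogs and cats are pets",
    "the dog chased the cat",
]

def stem(token):
    # Crude suffix stripping, a stand-in for a real stemmer such as Porter's.
    for suffix in ("ing", "ed", "s"):
        if token.endswith(suffix) and len(token) > len(suffix) + 2:
            return token[: -len(suffix)]
    return token

def tokenize(text):
    return [stem(t) for t in text.lower().split()]

doc_tokens = [set(tokenize(d)) for d in docs]
n_docs = len(docs)

def idf(term):
    # Inverse document frequency: terms appearing in few documents score high,
    # terms appearing in most documents score low.
    df = sum(1 for d in doc_tokens if term in d)
    return math.log(n_docs / (1 + df)) + 1

def reweight(query):
    # Assign each (stemmed) query term an idf weight, mimicking the kind of
    # tf-idf-style term emphasis the trained agent rediscovers.
    return {t: idf(t) for t in tokenize(query)}

weights = reweight("the cats sat")
# "the" (in 2 of 3 docs) receives a lower weight than "sat" (in 1 doc),
# and "cats" is folded into the stem "cat".
```

In the paper's setting, the agent achieves a comparable effect not by computing idf explicitly but by emitting reformulated questions that emphasize rare, informative terms and collapse inflectional variants.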

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1801.07537
Document Type :
Working Paper