
Improving Retrieval Augmented Open-Domain Question-Answering with Vectorized Contexts

Authors :
Chen, Zhuo
Wang, Xinyu
Jiang, Yong
Xie, Pengjun
Huang, Fei
Tu, Kewei
Publication Year :
2024

Abstract

In the era of large language models, applying techniques such as Retrieval Augmented Generation can better address Open-Domain Question-Answering problems. Due to constraints including model size and computing resources, the context length is often limited, and it becomes challenging to empower the model to cover overlong contexts while answering questions from open domains. This paper proposes a general and convenient method for covering longer contexts in Open-Domain Question-Answering tasks. It leverages a small encoder language model that effectively encodes contexts, and applies cross-attention between the encodings and the original inputs. With our method, the original language model can cover contexts several times longer while keeping computing requirements close to the baseline. Our experiments demonstrate that after fine-tuning, performance improves across two held-in datasets, four held-out datasets, and two In-Context Learning settings.
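The abstract describes the architecture only at a high level: a small encoder compresses long retrieved contexts into vectors, and the main model attends to those vectors via cross-attention. The sketch below illustrates that general encode-then-cross-attend pattern, not the paper's actual implementation; the module name VectorizedContextAttention, the layer sizes, and the choice of PyTorch primitives are all illustrative assumptions.

```python
import torch
import torch.nn as nn

class VectorizedContextAttention(nn.Module):
    """Toy sketch: a small encoder compresses long retrieved contexts
    into vectors, which the main model fuses in via cross-attention."""

    def __init__(self, enc_dim=384, dec_dim=768, n_heads=8):
        super().__init__()
        # Small encoder standing in for a compact encoder language model.
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=enc_dim, nhead=4, batch_first=True),
            num_layers=2,
        )
        # Project encoded context vectors into the main model's hidden space.
        self.proj = nn.Linear(enc_dim, dec_dim)
        # Cross-attention: the main model's states query the context vectors.
        self.cross_attn = nn.MultiheadAttention(dec_dim, n_heads, batch_first=True)

    def forward(self, decoder_states, context_embeddings):
        # context_embeddings: (batch, ctx_len, enc_dim), embeddings of the
        # retrieved passages; ctx_len may far exceed the main model's limit.
        ctx = self.proj(self.encoder(context_embeddings))
        # decoder_states: (batch, seq_len, dec_dim), the original inputs.
        fused, _ = self.cross_attn(decoder_states, ctx, ctx)
        return decoder_states + fused  # residual fusion with original states

# Usage: attend over 4096 context tokens from a 512-token forward pass.
layer = VectorizedContextAttention()
dec = torch.randn(2, 512, 768)
ctx = torch.randn(2, 4096, 384)
print(layer(dec, ctx).shape)  # torch.Size([2, 512, 768])
```

Because the long context passes only through the small encoder and a single cross-attention step, the added cost grows with the cheap encoder rather than with the main model, which is consistent with the abstract's claim of near-baseline computing requirements.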

Details

Database :
OAIster
Publication Type :
Electronic Resource
Accession number :
edsoai.on1438542765
Document Type :
Electronic Resource