Prompting large language model with context and pre-answer for knowledge-based VQA.
- Source :
- Pattern Recognition, Jul 2024, Vol. 151.
- Publication Year :
- 2024
Abstract
- Existing studies apply Large Language Models (LLMs) to knowledge-based Visual Question Answering (VQA) with encouraging results. However, owing to insufficient input information, previous methods fall short in constructing the prompt for the LLM and cannot fully activate its capacity. In addition, previous works adopt GPT-3 for inference, which is costly. In this paper, we propose PCPA: a framework that Prompts the LLM with Context and Pre-Answer for VQA. Specifically, we adopt a vanilla VQA model to generate in-context examples and candidate answers, and add a pre-answer selection layer to generate pre-answers. We integrate the in-context examples and pre-answers into the prompt to inspire the LLM. In addition, we choose LLaMA instead of GPT-3, as it is an open, freely available model, and we build a small dataset to fine-tune it. Compared to existing baselines, PCPA improves accuracy by more than 2.1 and 1.5 on OK-VQA and A-OKVQA, respectively.
• We propose a novel framework that prompts an LLM for knowledge-based VQA.
• We add dynamic routing to the vanilla VQA model to further inspire the LLM.
• We add a pre-answer selection layer to generate more suitable pre-answers.
• We build a small dataset for fine-tuning the LLM. [ABSTRACT FROM AUTHOR]
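The abstract outlines prompt construction from in-context examples, candidate answers, and a selected pre-answer. As a rough, hypothetical sketch only (the template wording, field names, and the top-candidate selection heuristic below are assumptions for illustration, not the paper's published implementation), a PCPA-style prompt might be assembled like this:

```python
# Hypothetical sketch of PCPA-style prompt construction for knowledge-based VQA.
# All field names, the template wording, and the selection heuristic are
# assumptions; the paper's actual pipeline may differ. Requires Python 3.10+.
from dataclasses import dataclass


@dataclass
class VQAExample:
    context: str                 # caption/context text describing the image
    question: str
    candidates: list[str]        # candidate answers from a vanilla VQA model
    answer: str | None = None    # gold answer (known for in-context examples)


def select_pre_answer(candidates: list[str]) -> str:
    """Stand-in for the paper's pre-answer selection layer: here we simply
    take the top-ranked candidate, assuming candidates are sorted by score."""
    return candidates[0]


def build_prompt(shots: list[VQAExample], test: VQAExample) -> str:
    """Assemble a few-shot prompt exposing context, candidate answers, and a
    pre-answer for each example, then ask the LLM the test question."""
    parts = [
        "Answer the question using the context, candidate answers, "
        "and pre-answer.\n"
    ]
    for ex in shots:
        parts.append(
            f"Context: {ex.context}\n"
            f"Question: {ex.question}\n"
            f"Candidates: {', '.join(ex.candidates)}\n"
            f"Pre-answer: {select_pre_answer(ex.candidates)}\n"
            f"Answer: {ex.answer}\n"
        )
    parts.append(
        f"Context: {test.context}\n"
        f"Question: {test.question}\n"
        f"Candidates: {', '.join(test.candidates)}\n"
        f"Pre-answer: {select_pre_answer(test.candidates)}\n"
        f"Answer:"
    )
    return "\n".join(parts)


if __name__ == "__main__":
    shots = [
        VQAExample("A man holds a red frisbee in a park.",
                   "What game is being played?",
                   ["frisbee", "catch", "football"], "frisbee"),
    ]
    test = VQAExample("A kitchen counter with a blender and fruit.",
                      "What appliance is on the counter?",
                      ["blender", "toaster", "mixer"])
    print(build_prompt(shots, test))
```

The actual PCPA framework additionally adds dynamic routing to the vanilla VQA model and fine-tunes LLaMA on a small purpose-built dataset; this sketch covers only the prompt-assembly step.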
- Subjects :
- *LANGUAGE models
- *GENERATIVE pre-trained transformers
Details
- Language :
- English
- ISSN :
- 0031-3203
- Volume :
- 151
- Database :
- Academic Search Index
- Journal :
- Pattern Recognition
- Publication Type :
- Academic Journal
- Accession number :
- 176406954
- Full Text :
- https://doi.org/10.1016/j.patcog.2024.110399