
A Survey on Knowledge-Enhanced Pre-trained Language Models

Authors :
Zhen, Chaoqi
Shang, Yanlei
Liu, Xiangyu
Li, Yifei
Chen, Yong
Zhang, Dell
Publication Year :
2022

Abstract

Natural Language Processing (NLP) has been revolutionized by the use of Pre-trained Language Models (PLMs) such as BERT. Despite setting new records in nearly every NLP task, PLMs still face a number of challenges, including poor interpretability, weak reasoning capability, and the need for large amounts of expensive annotated data when applied to downstream tasks. By integrating external knowledge into PLMs, Knowledge-Enhanced Pre-trained Language Models (KEPLMs) have the potential to overcome the above-mentioned limitations. In this paper, we examine KEPLMs systematically through a series of studies. Specifically, we outline the common types and different formats of knowledge to be integrated into KEPLMs, detail the existing methods for building and evaluating KEPLMs, present the applications of KEPLMs in downstream tasks, and discuss future research directions. Researchers will benefit from this survey by gaining a quick and comprehensive overview of the latest developments in this field.

Comment: 19 pages, 12 figures, 192 references

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2212.13428
Document Type :
Working Paper