K-LM: Knowledge Augmenting in Language Models Within the Scholarly Domain
- Author
Vivek Kumar, Diego Reforgiato Recupero, Rim Helaoui, and Daniele Riboni
- Subjects
Deep learning, machine learning, knowledge graphs, knowledge graphs embeddings, GPT-2, BERT, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
The use of superior algorithms and complex architectures in language models has successfully imparted human-like abilities to machines for specific tasks. However, two significant constraints, the available training data size and the understanding of domain-specific context, prevent pre-trained language models from achieving optimal and reliable performance. A potential solution to these limitations is to equip language models with domain knowledge. While commonly adopted techniques use Knowledge Graph Embeddings (KGEs) to inject domain knowledge, we propose a Knowledge Language Model (K-LM) that directly uses Resource Description Framework (RDF) triples extracted from world knowledge bases. The proposed model works in conjunction with the Generative Pretrained Transformer (GPT-2) and Bidirectional Encoder Representations from Transformers (BERT) and uses a well-defined pipeline to select, categorize, and filter the RDF triples. In addition, we introduce heuristic methods to inject domain-specific knowledge into K-LM by leveraging knowledge graphs (KGs). We tested our approach on a classification task within the scholarly domain using two KGs, and the results show that the proposed language model significantly outperforms the baselines and BERT for each KG. Our experimental findings also indicate that the relevance of the KG used matters more than the quantity of injected RDF triples. Moreover, each of the proposed methods for injecting RDF triples increases the overall model accuracy, demonstrating that K-LM is a promising choice for domain adaptation in solving knowledge-driven problems.
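To illustrate the general idea of feeding RDF triples to a pre-trained language model directly as text (rather than through KGE vectors), the sketch below linearizes triples and pairs them with the input sentence for a BERT classification head. The linearization scheme, model names, and label count are illustrative assumptions, not the K-LM pipeline described in the paper.

```python
# Hypothetical sketch: linearize RDF triples and prepend them to the input text
# before classification with BERT. All names here are illustrative assumptions,
# not the authors' K-LM implementation.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=5  # assumed number of scholarly-domain classes
)

def linearize(triples):
    """Turn (subject, predicate, object) triples into one plain-text string."""
    return " [SEP] ".join(f"{s} {p} {o}" for s, p, o in triples)

def classify(text, triples):
    """Concatenate linearized domain knowledge with the input and classify it."""
    knowledge = linearize(triples)
    inputs = tokenizer(knowledge, text, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits
    return int(logits.argmax(dim=-1))

# Example: a single triple drawn from a scholarly knowledge graph.
triples = [("BERT", "isA", "language model")]
print(classify("Transformers improve contextual word representations.", triples))
```

This only shows the simplest form of direct triple injection; the paper's pipeline additionally selects, categorizes, and filters the triples before they reach the model.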
- Published
2022