
Enhancing Low-Resource Relation Representations through Multi-View Decoupling

Authors:
Fan, Chenghao
Wei, Wei
Qu, Xiaoye
Lu, Zhenyi
Xie, Wenfeng
Cheng, Yu
Chen, Dangyang
Publication Year:
2023

Abstract

Recently, prompt-tuning with pre-trained language models (PLMs) has demonstrated significant improvements on relation extraction (RE) tasks. However, in low-resource scenarios, where training data is scarce, previous prompt-based methods may still perform poorly at prompt-based representation learning due to a superficial understanding of the relation. To this end, we highlight the importance of learning high-quality relation representations for low-resource RE, and propose a novel prompt-based relation representation method, named MVRE (Multi-View Relation Extraction), to better leverage the capacity of PLMs and improve RE performance within the low-resource prompt-tuning paradigm. Specifically, MVRE decouples each relation into different perspectives, encompassing multi-view relation representations that maximize the likelihood during relation inference. Furthermore, we design a Global-Local loss and a Dynamic-Initialization method to better align the multi-view relation-representing virtual words with the semantics of the relation labels, both during optimization and at initialization. Extensive experiments on three benchmark datasets show that our method achieves state-of-the-art performance in low-resource settings.

Comment: Accepted to AAAI 2024
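A minimal PyTorch sketch of the multi-view decoupling idea described in the abstract: each relation is assigned several "views", each realized as a virtual-word embedding, and the relation score aggregates the likelihood across its views. The class name, the per-(relation, view) embedding table, and the logsumexp aggregation are illustrative assumptions, not the authors' actual implementation.

import torch
import torch.nn as nn

class MultiViewRelationHead(nn.Module):
    def __init__(self, hidden_size: int, num_relations: int, num_views: int):
        super().__init__()
        self.num_relations = num_relations
        self.num_views = num_views
        # One virtual-word embedding per (relation, view) pair.
        self.view_embeddings = nn.Embedding(num_relations * num_views, hidden_size)

    def forward(self, mask_hidden: torch.Tensor) -> torch.Tensor:
        # mask_hidden: (batch, hidden) -- PLM hidden state at the [MASK] position.
        # Score every (relation, view) virtual word against the mask representation.
        logits = mask_hidden @ self.view_embeddings.weight.T      # (batch, R*V)
        logits = logits.view(-1, self.num_relations, self.num_views)
        # Aggregate views per relation; logsumexp keeps gradients flowing
        # to all views while emphasizing the best-matching one.
        return torch.logsumexp(logits, dim=-1)                    # (batch, R)

# Toy usage with random tensors standing in for PLM hidden states.
head = MultiViewRelationHead(hidden_size=768, num_relations=10, num_views=3)
scores = head(torch.randn(4, 768))
pred = scores.argmax(dim=-1)   # predicted relation per example

In the paper's full setup, the view embeddings would additionally be tied to the relation-label semantics (the role of the Global-Local loss and Dynamic-Initialization), which this sketch does not attempt to reproduce.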

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2312.17267
Document Type:
Working Paper
Full Text:
https://doi.org/10.1609/aaai.v38i16.29752