
Multistructure Contrastive Learning for Pretraining Event Representation

Authors :
Jianming Zheng
Fei Cai
Jun Liu
Yanxiang Ling
Honghui Chen
Source :
IEEE Transactions on Neural Networks and Learning Systems
Publication Year :
2022

Abstract

Event representation aims to transform individual events from a narrative event chain into a set of low-dimensional vectors that support a series of downstream applications, e.g., similarity differentiation and missing event prediction. Traditional event representation models tend to focus on a single modeling perspective and are thus incapable of capturing physically disconnected yet semantically connected event segments. We therefore propose a heterogeneous event graph model (HeterEvent) to explicitly represent such event segments. A further challenge for traditional event representation models is inherited from the datasets themselves: data sparsity and insufficient labeled data are common in event chains, easily leading to overfitting and undertraining. We therefore extend HeterEvent with a multistructure contrastive learning framework (MulCL) that alleviates these training risks from two structural perspectives. From the sequential perspective, a sequential-view contrastive learning component (SeqCL) is designed to facilitate the acquisition of sequential characteristics. From the graph perspective, a graph-view contrastive learning component (GraCL) is proposed to enhance the robustness of graph training by comparing different corrupted graphs. Experimental results confirm that our proposed MulCL
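The abstract does not give implementation details, but contrastive objectives of this kind (comparing a clean view against a corrupted view of the same events) are commonly instantiated as an InfoNCE loss over in-batch positive/negative pairs. The following is a minimal illustrative sketch, not the paper's actual code; the function name, cosine similarity, and temperature value are assumptions:

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Illustrative InfoNCE loss between two views of the same events.

    z1, z2: (n, d) arrays of event embeddings from two views
    (e.g., the original graph and a corrupted graph). Row i of z1
    and row i of z2 form a positive pair; all other rows in the
    batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = z1 @ z2.T / temperature  # (n, n) similarity matrix
    # Row-wise log-softmax; positive pairs lie on the diagonal.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))
```

Under this formulation, embeddings of matching views are pulled together and mismatched pairs pushed apart, which is the general mechanism the SeqCL and GraCL components build on.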

Details

ISSN :
2162-2388
Database :
OpenAIRE
Journal :
IEEE Transactions on Neural Networks and Learning Systems
Accession number :
edsair.doi.dedup.....ac68c25b70fc86b4bb0a2bd72fd322cf