
Learning to Contextually Aggregate Multi-Source Supervision for Sequence Labeling

Authors :
Lan, Ouyu
Huang, Xiao
Lin, Bill Yuchen
Jiang, He
Liu, Liyuan
Ren, Xiang
Publication Year :
2019

Abstract

Sequence labeling is a fundamental framework for various natural language processing problems. Its performance is largely influenced by annotation quality and quantity in supervised learning scenarios, and obtaining ground-truth labels is often costly. In many cases, ground-truth labels do not exist, but noisy annotations or annotations from different domains are accessible. In this paper, we propose a novel framework, Consensus Network (ConNet), that can be trained on annotations from multiple sources (e.g., crowd annotations, cross-domain data). It learns an individual representation for every source and dynamically aggregates source-specific knowledge with a context-aware attention module, yielding a model that reflects the agreement (consensus) among the sources. We evaluate the proposed framework in two practical settings of multi-source learning: learning with crowd annotations and unsupervised cross-domain model adaptation. Extensive experimental results show that our model achieves significant improvements over existing methods in both settings. We also demonstrate that the method applies to various tasks and copes with different encoders.

Comment: Accepted to ACL 2020; code: https://github.com/INK-USC/ConNet
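The core aggregation idea described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the dot-product scoring, vector dimensions, and function names are assumptions, since the abstract does not specify how the context-aware attention module is parameterized.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def aggregate_sources(context, source_reps):
    """Context-aware attention over per-source representations.

    context:     (d,)  sentence-level context vector (assumed form).
    source_reps: (k, d) one representation per annotation source.

    Scores are plain dot products with the context -- a simplification
    of ConNet's attention module, whose exact form is not given here.
    """
    scores = source_reps @ context   # (k,) relevance of each source
    weights = softmax(scores)        # consensus weights, sum to 1
    return weights @ source_reps     # (d,) aggregated representation

# Toy usage: 3 annotation sources, 8-dimensional representations.
rng = np.random.default_rng(0)
ctx = rng.standard_normal(8)
reps = rng.standard_normal((3, 8))
agg = aggregate_sources(ctx, reps)
```

The aggregated vector is a convex combination of the source representations, so sources whose representations align with the current context dominate the consensus.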

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1910.04289
Document Type :
Working Paper