
WRENCH: A Comprehensive Benchmark for Weak Supervision

Authors:
Zhang, Jieyu
Yu, Yue
Li, Yinghao
Wang, Yujing
Yang, Yaming
Yang, Mao
Ratner, Alexander
Publication Year:
2021

Abstract

Recent Weak Supervision (WS) approaches have had widespread success in easing the bottleneck of labeling training data for machine learning by synthesizing labels from multiple potentially noisy supervision sources. However, proper measurement and analysis of these approaches remain a challenge. First, datasets used in existing works are often private and/or custom, limiting standardization. Second, WS datasets with the same name and base data often vary in terms of the labels and weak supervision sources used, a significant "hidden" source of evaluation variance. Finally, WS studies often diverge in terms of the evaluation protocol and ablations used. To address these problems, we introduce a benchmark platform, WRENCH, for thorough and standardized evaluation of WS approaches. It consists of 22 varied real-world datasets for classification and sequence tagging; a range of real, synthetic, and procedurally-generated weak supervision sources; and a modular, extensible framework for WS evaluation, including implementations for popular WS methods. We use WRENCH to conduct extensive comparisons over more than 120 method variants to demonstrate its efficacy as a benchmark platform. The code is available at https://github.com/JieyuZ2/wrench.

Comment: NeurIPS 2021 Datasets and Benchmarks Track
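To fix ideas about what "synthesizing labels from multiple potentially noisy supervision sources" means in practice, the following is a minimal, self-contained sketch of the simplest label aggregator, majority voting, which is a common baseline in the two-stage WS pipelines that benchmarks like WRENCH evaluate. Everything here (the ABSTAIN convention, the toy label matrix) is illustrative only and is not WRENCH's actual API; consult the repository README for the real interface.

```python
# Sketch of majority-vote label aggregation: combine votes from several
# noisy labeling sources into one label per example. Toy data only.
import numpy as np

ABSTAIN = -1  # convention: a source that does not fire abstains


def majority_vote(weak_labels: np.ndarray, n_classes: int) -> np.ndarray:
    """Aggregate an (n_examples, n_sources) matrix of weak labels.

    Each entry is a class index in [0, n_classes) or ABSTAIN. Ties break
    toward the lowest class index; rows where every source abstains stay
    ABSTAIN.
    """
    n = weak_labels.shape[0]
    aggregated = np.full(n, ABSTAIN)
    for i in range(n):
        votes = weak_labels[i][weak_labels[i] != ABSTAIN]
        if votes.size:
            counts = np.bincount(votes, minlength=n_classes)
            aggregated[i] = counts.argmax()
    return aggregated


# Three noisy sources voting on four examples of a binary task.
L = np.array([[1, 1, 0],
              [0, ABSTAIN, 0],
              [ABSTAIN, ABSTAIN, ABSTAIN],
              [1, 0, ABSTAIN]])
print(majority_vote(L, n_classes=2))  # -> [ 1  0 -1  0]
```

In a full two-stage pipeline, the aggregated labels produced by a step like this would then train a downstream end model; more sophisticated label models weight sources by estimated accuracy instead of counting votes equally.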

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2109.11377
Document Type:
Working Paper