
Disentangling Sampling and Labeling Bias for Learning in Large-Output Spaces

Authors:
Rawat, Ankit Singh
Menon, Aditya Krishna
Jitkrittum, Wittawat
Jayasumana, Sadeep
Yu, Felix X.
Reddi, Sashank
Kumar, Sanjiv
Publication Year:
2021

Abstract

Negative sampling schemes enable efficient training in the presence of a large number of classes by approximating a computationally expensive loss function that takes all labels into account. In this paper, we present a new connection between these schemes and loss-modification techniques for countering label imbalance. We show that different negative sampling schemes implicitly trade off performance on dominant versus rare labels. Further, we provide a unified means to explicitly tackle both sampling bias, which arises from working with a subset of all labels, and labeling bias, which is inherent to the data due to label imbalance. We empirically verify our findings on long-tail classification and retrieval benchmarks.

Comment: To appear in ICML 2021
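To illustrate the general idea the abstract refers to (not this paper's specific estimator), the following sketch contrasts a full softmax cross-entropy, which costs O(number of classes) per example, with a sampled approximation that scores only the positive label plus a few sampled negatives and applies a standard logQ correction to reduce sampling bias. All sizes, the uniform proposal distribution, and the random scores are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

num_classes = 10_000   # large output space (illustrative size)
num_negatives = 20     # negatives sampled per example
positive = 3           # index of the true label (illustrative)

# Hypothetical model scores for one example over all classes.
logits = rng.normal(size=num_classes)

# Full softmax cross-entropy: touches every class.
full_loss = -logits[positive] + np.log(np.sum(np.exp(logits)))

# Sampled approximation: draw negatives from a proposal distribution q
# (uniform here) and subtract log q from each sampled score -- the usual
# logQ correction that reduces the bias introduced by sampling.
q = np.full(num_classes, 1.0 / num_classes)
negatives = rng.choice(num_classes, size=num_negatives, replace=False, p=q)
negatives = negatives[negatives != positive]  # drop an accidental positive
sampled = np.concatenate(([positive], negatives))
corrected = logits[sampled] - np.log(q[sampled])
sampled_loss = -corrected[0] + np.log(np.sum(np.exp(corrected)))
```

With a uniform proposal the correction shifts every sampled logit by the same constant, so it leaves this particular loss unchanged; with a non-uniform proposal (e.g. frequency-based sampling, which the abstract's trade-off between dominant and rare labels concerns) the correction matters.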

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2105.05736
Document Type:
Working Paper