
Improving Generalizability in Implicitly Abusive Language Detection with Concept Activation Vectors

Authors:
Nejadgholi, Isar
Fraser, Kathleen C.
Kiritchenko, Svetlana
Publication Year:
2022

Abstract

Robustness of machine learning models on ever-changing real-world data is critical, especially for applications affecting human well-being such as content moderation. New kinds of abusive language continually emerge in online discussions in response to current events (e.g., COVID-19), and the deployed abuse detection systems should be updated regularly to remain accurate. In this paper, we show that general abusive language classifiers tend to be fairly reliable in detecting out-of-domain explicitly abusive utterances but fail to detect new types of more subtle, implicit abuse. Next, we propose an interpretability technique, based on the Testing Concept Activation Vector (TCAV) method from computer vision, to quantify the sensitivity of a trained model to the human-defined concepts of explicit and implicit abusive language, and use that to explain the generalizability of the model on new data, in this case, COVID-related anti-Asian hate speech. Extending this technique, we introduce a novel metric, Degree of Explicitness, for a single instance and show that the new metric is beneficial in suggesting out-of-domain unlabeled examples to effectively enrich the training data with informative, implicitly abusive texts.

Comment: Accepted for publication at ACL 2022
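
As a rough illustration of the TCAV-style analysis described above, the sketch below derives a concept activation vector (CAV) for an "explicit abuse" concept from classifier-layer activations, computes a TCAV-style sensitivity score, and scores a single instance along the CAV. All names, array shapes, and the random placeholder data (explicit_acts, random_acts, grads) are assumptions standing in for real model activations and gradients, and the per-instance score is only a loose analogue of the paper's Degree of Explicitness metric.

# Hedged sketch: CAV and TCAV-style scores, using placeholder data.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
hidden_dim = 768  # assumed size of an encoder layer, e.g., BERT-base

# Placeholders for layer activations of concept examples (explicitly abusive
# texts) and of random/contrast examples; in practice these come from the model.
explicit_acts = rng.normal(loc=0.5, size=(200, hidden_dim))
random_acts = rng.normal(loc=0.0, size=(200, hidden_dim))

# 1) Fit a linear probe separating concept activations from random ones.
X = np.vstack([explicit_acts, random_acts])
y = np.concatenate([np.ones(len(explicit_acts)), np.zeros(len(random_acts))])
probe = LogisticRegression(max_iter=1000).fit(X, y)

# 2) The CAV is the unit normal of the probe's decision boundary.
cav = probe.coef_[0] / np.linalg.norm(probe.coef_[0])

# 3) TCAV-style concept sensitivity: directional derivative of the "abusive"
#    class logit along the CAV; here the gradients are placeholder vectors.
grads = rng.normal(size=(500, hidden_dim))  # d(abusive logit)/d(activations)
sensitivities = grads @ cav
tcav_score = float((sensitivities > 0).mean())  # fraction of positive sensitivities

# 4) A per-instance score: sensitivity of one example's gradient along the CAV
#    (a rough analogue of the paper's Degree of Explicitness).
single_grad = rng.normal(size=hidden_dim)
degree_of_explicitness = float(single_grad @ cav)

print(f"TCAV score: {tcav_score:.2f}, per-instance score: {degree_of_explicitness:.2f}")

In the setting described in the abstract, a score of this kind could be used to rank unlabeled out-of-domain texts and surface likely implicitly abusive examples for annotation and retraining, which is the role the Degree of Explicitness metric plays in the paper.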

Details

Database:
OAIster
Publication Type:
Electronic Resource
Accession number:
edsoai.on1333762238
Document Type:
Electronic Resource