Reading Between the Demographic Lines: Resolving Sources of Bias in Toxicity Classifiers
- Publication Year :
- 2020
Abstract
- The censorship of toxic comments is often left to the judgment of imperfect models. Perspective API, a creation of Google technology incubator Jigsaw, is perhaps the most widely used toxicity classifier in industry; the model is employed by several online communities, including The New York Times, to identify and filter out toxic comments with the goal of preserving online safety. Unfortunately, Google's model tends to unfairly assign higher toxicity scores to comments containing words referring to the identities of commonly targeted groups (e.g., "woman," "gay," etc.) because these identities are frequently referenced in a disrespectful manner in the training data. As a result, comments generated by marginalized groups referencing their identities are often mistakenly censored. It is important to be cognizant of this unintended bias and strive to mitigate its effects. To address this issue, we have constructed several toxicity classifiers with the intention of reducing unintended bias while maintaining strong classification performance.
- Comment: 8 pages, 13 figures
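- The identity-term bias described in the abstract is commonly probed by scoring otherwise identical, non-toxic template sentences that differ only in the identity word they mention. The sketch below illustrates that idea; it is not the authors' code. The function `score_toxicity` is a hypothetical stand-in for whatever classifier is being audited (e.g., a call to Perspective API or a locally trained model), and the template and identity terms are illustrative choices.

```python
# Minimal sketch of template-based identity-bias probing, assuming a
# hypothetical scoring function `score_toxicity(text) -> float in [0, 1]`.
from typing import Callable, Dict


def probe_identity_bias(score_toxicity: Callable[[str], float]) -> Dict[str, float]:
    """Score the same neutral sentence with different identity terms.

    A biased classifier would assign noticeably higher toxicity to the
    sentences mentioning commonly targeted identities, even though every
    sentence is benign.
    """
    template = "I am a {} person."
    identity_terms = ["woman", "gay", "black", "muslim", "straight", "tall"]
    return {term: score_toxicity(template.format(term)) for term in identity_terms}


if __name__ == "__main__":
    # Placeholder scorer so the sketch runs on its own; replace it with a
    # real classifier to observe the gap the paper is concerned with.
    import random

    scores = probe_identity_bias(lambda text: random.random())
    for term, score in sorted(scores.items(), key=lambda kv: -kv[1]):
        print(f"{term:>10}: {score:.3f}")
```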
Details
- Database :
- arXiv
- Publication Type :
- Report
- Accession number :
- edsarx.2006.16402
- Document Type :
- Working Paper