Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities
- Source :
- COLING, The 28th International Conference on Computational Linguistics (COLING 2020)
- Publication Year :
- 2020
- Publisher :
- arXiv, 2020.
Abstract
- In this paper, we introduce a new annotated dataset aimed at supporting the development of NLP models to identify and categorize language that is patronizing or condescending towards vulnerable communities (e.g. refugees, homeless people, poor families). While the prevalence of such language in the general media has long been shown to have harmful effects, it differs from other types of harmful language in that it is generally used unconsciously and with good intentions. We furthermore believe that the often subtle nature of patronizing and condescending language (PCL) presents an interesting technical challenge for the NLP community. Our analysis of the proposed dataset shows that identifying PCL is hard for standard NLP models, with language models such as BERT achieving the best results.
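The abstract notes that identifying PCL is hard for standard NLP models, partly because PCL is conveyed through subtle framing rather than overt slurs. As a rough illustration of the kind of surface-level baseline such models improve on, here is a minimal sketch of a bag-of-words naive Bayes classifier. The training sentences, labels, and class names below are invented for illustration only; they are not drawn from the dataset described in the paper.

```python
from collections import Counter
import math

def tokenize(text):
    """Lowercase whitespace tokenization -- deliberately crude."""
    return text.lower().split()

class NaiveBayesPCL:
    """Multinomial naive Bayes with add-one smoothing over bag-of-words features."""

    def __init__(self):
        self.word_counts = {}        # label -> Counter of token counts
        self.doc_counts = Counter()  # label -> number of training documents
        self.vocab = set()

    def fit(self, docs, labels):
        for doc, label in zip(docs, labels):
            self.doc_counts[label] += 1
            counts = self.word_counts.setdefault(label, Counter())
            for tok in tokenize(doc):
                counts[tok] += 1
                self.vocab.add(tok)

    def predict(self, doc):
        total_docs = sum(self.doc_counts.values())
        best_label, best_score = None, float("-inf")
        for label, counts in self.word_counts.items():
            # log prior + sum of add-one-smoothed log likelihoods
            score = math.log(self.doc_counts[label] / total_docs)
            total = sum(counts.values())
            for tok in tokenize(doc):
                score += math.log((counts[tok] + 1) / (total + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

# Hypothetical toy examples (NOT from the dataset) purely to exercise the code.
train_texts = [
    "these poor souls desperately need our generous help",
    "we must save the helpless victims from their plight",
    "the report describes housing policy for low income families",
    "the council published new data on refugee resettlement",
]
train_labels = ["pcl", "pcl", "neutral", "neutral"]

clf = NaiveBayesPCL()
clf.fit(train_texts, train_labels)
print(clf.predict("the poor helpless children"))  # leans "pcl" on these toy data
```

A classifier like this keys on individual words ("poor", "helpless"), which is exactly why subtle, well-intentioned condescension is hard to catch: the same vocabulary appears in neutral reporting, so contextual models such as BERT fare better, as the paper's analysis reports.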
- Subjects :
- FOS: Computer and information sciences
Computer Science - Computation and Language
Computation and Language (cs.CL)
Refugee
Categorization
Language model
Details
- Database :
- OpenAIRE
- Journal :
- COLING, The 28th International Conference on Computational Linguistics (COLING 2020)
- Accession number :
- edsair.doi.dedup.....742a55109de540bdd9bba446c259c5f0
- Full Text :
- https://doi.org/10.48550/arxiv.2011.08320