In Validations We Trust? The Impact of Imperfect Human Annotations as a Gold Standard on the Quality of Validation of Automated Content Analysis.
- Source :
- Political Communication, Jul/Aug 2020, Vol. 37, Issue 4, p550-572. 23p. 3 Charts, 3 Graphs.
- Publication Year :
- 2020
Abstract
- Political communication has become one of the central arenas of innovation in the application of automated analysis approaches to ever-growing quantities of digitized texts. However, although researchers routinely and conveniently resort to certain forms of human coding to validate the results derived from automated procedures, in practice the actual "quality assurance" of such a "gold standard" often goes unchecked. Contemporary practices of validation via manual annotation are far from acknowledged best practices in the literature, and the reporting and interpretation of validation procedures differ greatly. We systematically assess the connection between the quality of human judgment in manual annotations and the relative performance evaluations of automated procedures against true standards by relying on large-scale Monte Carlo simulations. The results from the simulations confirm that there is a substantially greater risk of a researcher reaching an incorrect conclusion about the performance of automated procedures when the quality of the manual annotations used for validation is not properly ensured. Our contribution should therefore be regarded as a call for the systematic use of high-quality manual validation material in any political communication study drawing on automated text analysis procedures. [ABSTRACT FROM AUTHOR]
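The abstract's core argument can be illustrated with a small simulation. The following is a minimal sketch of that logic, not the authors' actual Monte Carlo design: it assumes binary labels and independent, uniform error rates for both the automated classifier and the human coders, with all parameter values chosen purely for illustration. It shows how the accuracy a researcher measures against a noisy human-coded "gold standard" can diverge from the classifier's true accuracy.

```python
# Hypothetical sketch: how imperfect human annotations distort validation results.
# Error rates, sample sizes, and run counts below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(42)

def simulate_validation(n_docs=10_000, classifier_error=0.15, annotator_error=0.20):
    """Compare a classifier's accuracy against the true labels vs. against
    a noisy human-coded 'gold standard'."""
    true_labels = rng.integers(0, 2, size=n_docs)              # ground truth
    flip_clf = rng.random(n_docs) < classifier_error           # classifier mistakes
    predictions = np.where(flip_clf, 1 - true_labels, true_labels)
    flip_ann = rng.random(n_docs) < annotator_error            # annotator mistakes
    gold_standard = np.where(flip_ann, 1 - true_labels, true_labels)

    true_accuracy = (predictions == true_labels).mean()
    measured_accuracy = (predictions == gold_standard).mean()  # what the researcher observes
    return true_accuracy, measured_accuracy

# Monte Carlo loop: repeat many runs across annotation-quality levels.
for annotator_error in (0.0, 0.1, 0.2, 0.3):
    results = np.array([simulate_validation(annotator_error=annotator_error)
                        for _ in range(500)])
    print(f"annotator error {annotator_error:.1f}: "
          f"true accuracy {results[:, 0].mean():.3f}, "
          f"measured accuracy {results[:, 1].mean():.3f}")
```

Under these assumptions, measured accuracy sinks toward chance as annotator error grows even though the classifier's true accuracy is unchanged, which is the kind of misleading validation outcome the article warns about.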
Details
- Language :
- English
- ISSN :
- 10584609
- Volume :
- 37
- Issue :
- 4
- Database :
- Academic Search Index
- Journal :
- Political Communication
- Publication Type :
- Academic Journal
- Accession Number :
- 145282202
- Full Text :
- https://doi.org/10.1080/10584609.2020.1723752