Assessing quality of selection procedures: Lower bound of false positive rate as a function of inter‐rater reliability.
- Author
- Bartoš, František and Martinková, Patrícia
- Subjects
- *FALSE positive error, *PSYCHOLOGICAL tests, *ERROR rates, *CLASSIFICATION, *PROBABILITY theory
- Abstract
Inter‐rater reliability (IRR) is one of the commonly used tools for assessing the quality of ratings from multiple raters. However, applicant selection procedures based on ratings from multiple raters usually result in a binary outcome: the applicant is either selected or not. This final outcome is not considered in IRR, which instead focuses on the ratings of the individual subjects or objects. We outline the connection between the ratings' measurement model (used for IRR) and a binary classification framework. We develop a simple way of approximating the probability of correctly selecting the best applicants, which allows us to compute error probabilities of the selection procedure (i.e., the false positive and false negative rates) or their lower bounds. We draw connections between IRR and binary classification metrics, showing that the binary classification metrics depend solely on the IRR coefficient and the proportion of selected applicants. We assess the performance of the approximation in a simulation study and apply it in an example comparing the reliability of multiple grant peer review selection procedures. We also discuss possible uses of the explored connections in other contexts, such as educational testing, psychological assessment, and health‐related measurement, and implement the computations in the R package IRR2FPR.
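The connection described above can be illustrated with a small Monte Carlo sketch rather than the paper's analytic approximation. The assumptions here are mine, not taken from the article: standard-normal latent true quality, the classical test theory relation corr(observed, true) = sqrt(reliability), top-`prop_selected` selection by observed score, and one common definition of the false positive and false negative rates. The function name `selection_error_rates` is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

def selection_error_rates(irr, prop_selected, n_applicants=100_000):
    """Monte Carlo sketch of the binary classification framework: latent
    true quality T, observed composite rating X with inter-rater
    reliability `irr`, and selection of the top `prop_selected` share of
    applicants by X. Returns (false_positive_rate, false_negative_rate)."""
    # Classical test theory: corr(X, T) = sqrt(reliability).
    t = rng.standard_normal(n_applicants)
    x = np.sqrt(irr) * t + np.sqrt(1.0 - irr) * rng.standard_normal(n_applicants)
    # Selection by observed rating; "deserving" = truly in the top share.
    selected = x >= np.quantile(x, 1.0 - prop_selected)
    deserving = t >= np.quantile(t, 1.0 - prop_selected)
    # FPR = P(selected | not deserving); FNR = P(not selected | deserving).
    fpr = np.mean(selected & ~deserving) / np.mean(~deserving)
    fnr = np.mean(~selected & deserving) / np.mean(deserving)
    return fpr, fnr
```

Consistent with the abstract's claim, the simulated rates vary only with the IRR coefficient and the selection proportion: lowering `irr` at a fixed `prop_selected` inflates both error rates, while `irr = 1` makes them vanish.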
- Published
- 2024