1. Interrater sleep stage scoring reliability between manual scoring from two European sleep centers and automatic scoring performed by the artificial intelligence–based Stanford-STAGES algorithm
- Author
Birgit Högl, Anna Heidbreder, Henry Völzke, Ambra Stefani, Heinz Hackner, Beate Stubbe, Klaus Berger, Matteo Cesari, Andras Szentkiralyi, Abubaker Ibrahim, and Thomas Penzel
- Subjects
Observer Variation, Pulmonary and Respiratory Medicine, Computerized Analysis, Reproducibility of Results, Electroencephalography, Scientific Investigations, Inter-rater Reliability, Physical Medicine and Rehabilitation, Neurology, Artificial Intelligence, Humans, Medicine, Deep Neural Networks, Sleep Stages, Neurology (clinical), Sleep, Algorithms, Reliability (statistics)
- Abstract
STUDY OBJECTIVES: The objective of this study was to evaluate interrater reliability between manual sleep stage scoring performed in 2 European sleep centers and automatic sleep stage scoring performed by the previously validated artificial intelligence–based Stanford-STAGES algorithm.

METHODS: Full-night polysomnographies of 1,066 participants were included. Sleep stages were manually scored in the Berlin and Innsbruck sleep centers and automatically scored with the Stanford-STAGES algorithm. For each participant, we compared (1) Innsbruck to Berlin scorings (INN vs BER); (2) Innsbruck to automatic scorings (INN vs AUTO); (3) Berlin to automatic scorings (BER vs AUTO); (4) epochs on which the Innsbruck and Berlin scorers were in consensus to automatic scorings (CONS vs AUTO); and (5) both Innsbruck and Berlin manual scorings (MAN) to the automatic ones (MAN vs AUTO). Interrater reliability was evaluated with several measures, including overall and sleep stage–specific Cohen's κ.

RESULTS: Overall agreement across participants was substantial for INN vs BER (κ = 0.66 ± 0.13), INN vs AUTO (κ = 0.68 ± 0.14), CONS vs AUTO (κ = 0.73 ± 0.14), and MAN vs AUTO (κ = 0.61 ± 0.14), and moderate for BER vs AUTO (κ = 0.55 ± 0.15). Human scorers disagreed most on N1 sleep (κ(N1) = 0.40 ± 0.16 for INN vs BER). Automatic scoring agreed least with manual scoring for N1 and N3 sleep (κ(N1) = 0.25 ± 0.14 and κ(N3) = 0.42 ± 0.32 for MAN vs AUTO).

CONCLUSIONS: Interrater reliability for sleep stage scoring between human scorers was in line with previous findings, and the algorithm achieved overall substantial agreement with manual scoring. In this cohort, the Stanford-STAGES algorithm performed similarly to how it performed in the original study, suggesting that it generalizes to new cohorts. Future independent studies should evaluate it further in other cohorts before it is integrated into clinical practice.

CITATION: Cesari M, Stefani A, Penzel T, et al. Interrater sleep stage scoring reliability between manual scoring from two European sleep centers and automatic scoring performed by the artificial intelligence–based Stanford-STAGES algorithm. J Clin Sleep Med. 2021;17(6):1237–1247.
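For readers who want to reproduce agreement metrics of this kind on their own epoch-by-epoch hypnograms, the minimal sketch below computes an overall Cohen's κ across the five stages and a one-vs-rest κ per stage using scikit-learn. The toy hypnograms, the `STAGES` label set, and the one-vs-rest definition of stage-specific κ are illustrative assumptions for this sketch, not the study's published pipeline.

```python
# Minimal sketch: overall and stage-specific Cohen's kappa between two
# epoch-by-epoch hypnograms (e.g., one manual scorer vs an automatic one).
# Hypnograms are assumed to be one label per 30-second epoch: W, N1, N2, N3, REM.
import numpy as np
from sklearn.metrics import cohen_kappa_score

STAGES = ["W", "N1", "N2", "N3", "REM"]  # assumed label set

def overall_kappa(scorer_a, scorer_b):
    """Overall Cohen's kappa across all five sleep stages."""
    return cohen_kappa_score(scorer_a, scorer_b, labels=STAGES)

def stage_specific_kappa(scorer_a, scorer_b, stage):
    """Cohen's kappa for one stage, treating each epoch as a binary label
    ('is this stage' vs 'is not'); one common stage-specific definition."""
    a = np.asarray(scorer_a) == stage
    b = np.asarray(scorer_b) == stage
    return cohen_kappa_score(a, b)

if __name__ == "__main__":
    # Toy hypnograms for illustration only (not study data).
    manual = ["W", "N1", "N2", "N2", "N3", "N3", "REM", "REM", "N2", "W"]
    auto   = ["W", "N2", "N2", "N2", "N3", "N2", "REM", "REM", "N2", "W"]
    print(f"overall kappa: {overall_kappa(manual, auto):.2f}")
    for s in STAGES:
        print(f"kappa({s}): {stage_specific_kappa(manual, auto, s):.2f}")
```

In a per-participant analysis such as the one summarized above, these functions would be applied to each recording separately and the resulting κ values averaged (mean ± SD) across participants.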
- Published
- 2021