
Looking for a reference for large datasets: relative reliability of visual and automatic sleep scoring

Authors :
Mathieu Jaspar
Jacques Prado
Vincenzo Muto
Christian Berthomier
Jérémie Mattout
Sarah Laxhmi Chellappa
Christophe Phillips
Christelle Meyer
Marie Brandewinder
Jonathan Devillers
Eric Salmon
Pierre Maquet
Giulia Gaggioni
Pierre Berthomier
Gilles Vandewalle
O. Benoit
Christina Schmidt
Publication Year :
2019
Publisher :
Cold Spring Harbor Laboratory, 2019.

Abstract

Study Objectives: New challenges in sleep science require describing fine-grained phenomena or dealing with large datasets. Besides the human-resource challenge of scoring huge datasets, inter- and intra-expert variability may also reduce the sensitivity of such studies. To disentangle the variability induced by the scoring method from the actual variability in the data, visual and automatic sleep scorings of healthy individuals were examined.

Methods: A first dataset (DS1, 4 recordings) scored by 6 experts plus an autoscoring algorithm was used to characterize inter-scoring variability. A second dataset (DS2, 88 recordings) scored a few weeks later was used to investigate intra-expert variability. Percentage agreement and Conger's kappa were derived from epoch-by-epoch comparisons on pairwise, consensus, and majority scorings.

Results: On DS1, the number of epochs of agreement decreased as the number of experts increased, in both majority and consensus scoring, with agreement ranging from 86% (pairwise) to 69% (all experts). Adding autoscoring to the visual scorings changed the kappa value from 0.81 to 0.79. Agreement between the expert consensus and autoscoring was 93%. On DS2, intra-expert variability was evidenced by a systematic decrease in kappa between autoscoring and each single expert across datasets (0.75 to 0.70).

Conclusions: Visual scoring induces inter- and intra-expert variability that is difficult to address, especially in big-data studies. When proven reliable, and because they are perfectly reproducible, autoscoring methods can cope with intra-scorer variability, making them a sensible option when dealing with large datasets.

Statement of Significance: We confirmed and extended previous findings highlighting the intra- and inter-expert variability in visual sleep scoring. On large datasets, these variability issues cannot be completely addressed by either practical or statistical solutions such as group training, majority, or consensus scoring. When an automated scoring method can be proven to be as reasonably imperfect as visual scoring but perfectly reproducible, it can serve as a reliable scoring reference for sleep studies.
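The abstract names the agreement statistics but not their computation. As a minimal sketch, assuming each scorer assigns one integer stage label per 30-second epoch, the pairwise percentage agreement and Conger's multi-rater kappa could be computed roughly as follows; the function name, data layout, and toy data are hypothetical, not the authors' actual pipeline:

    import numpy as np

    def conger_kappa(scores):
        """Pairwise agreement and Conger's kappa for multi-rater labels.

        scores: (n_raters, n_epochs) integer array of sleep stage labels.
        Returns (mean_pairwise_agreement, kappa).
        """
        scores = np.asarray(scores)
        n_raters, _ = scores.shape
        stages = np.unique(scores)

        # Each rater's marginal distribution over stages.
        marginals = np.array([
            [(scores[r] == s).mean() for s in stages]
            for r in range(n_raters)
        ])

        pairs = [(i, j) for i in range(n_raters)
                 for j in range(i + 1, n_raters)]
        # Observed agreement: mean epoch-by-epoch agreement over rater pairs.
        p_obs = np.mean([(scores[i] == scores[j]).mean() for i, j in pairs])
        # Chance agreement: mean over pairs of the product of the two
        # raters' marginals, summed across stages (Conger's formulation).
        p_exp = np.mean([(marginals[i] * marginals[j]).sum()
                         for i, j in pairs])

        return p_obs, (p_obs - p_exp) / (1.0 - p_exp)

    # Toy usage: 3 scorers, 10 epochs, stages 0=W, 1=N1, 2=N2, 3=N3, 4=REM.
    rng = np.random.default_rng(0)
    toy = rng.integers(0, 5, size=(3, 10))
    agreement, kappa = conger_kappa(toy)
    print(f"agreement: {agreement:.2f}, kappa: {kappa:.2f}")

Conger's kappa generalizes Cohen's kappa to more than two raters by computing chance agreement from each rater's own marginal stage distribution, which matches the pairwise, consensus, and majority comparisons described in the Methods.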

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....0fcaf90080c16e30f5e768b8816cb41f
Full Text :
https://doi.org/10.1101/576090