
Multi-Modal Deep Learning for Assessing Surgeon Technical Skill.

Authors :
Kasa, Kevin
Burns, David
Goldenberg, Mitchell G.
Selim, Omar
Whyne, Cari
Hardisty, Michael
Source :
Sensors (14248220); Oct2022, Vol. 22 Issue 19, p7328-7328, 16p
Publication Year :
2022

Abstract

This paper introduces a new dataset of a surgical knot-tying task, together with a multi-modal deep learning model that performs comparably to expert human raters on this skill assessment task. Seventy-two surgical trainees and faculty were recruited for the knot-tying task, and video, kinematic, and image data were recorded from their performances. Three expert human raters assessed skill using the Objective Structured Assessment of Technical Skill (OSATS) Global Rating Scale (GRS). We also designed and developed three deep learning models: a ResNet-based image model, a ResNet-LSTM kinematic model, and a multi-modal model leveraging both the image and time-series kinematic data. All three models perform comparably to the expert human raters on most GRS domains, with the multi-modal model achieving the best overall performance as measured by the mean squared error (MSE) and intraclass correlation coefficient (ICC). This work shows that multi-modal deep learning has the potential to replicate human raters on a challenging, human-performed knot-tying task, and the resulting algorithm achieves state-of-the-art performance in surgical skill assessment. As objective assessment of technical skill becomes a growing but resource-intensive element of surgical education, this study is an important step towards automated surgical skill assessment, ultimately reducing the burden on training faculty and institutions. [ABSTRACT FROM AUTHOR]
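The abstract names the three model families but not their implementation details. The following is a minimal sketch, assuming a PyTorch-style ResNet-18 image branch, a single-layer LSTM kinematic branch, and late fusion by concatenation; the layer sizes, number of kinematic channels, and number of GRS domains regressed are illustrative assumptions, not the authors' published architecture.

```python
# Minimal sketch of a multi-modal GRS regressor (illustrative, not the paper's code).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class MultiModalGRSRegressor(nn.Module):
    def __init__(self, kinematic_channels=6, num_grs_domains=5):
        super().__init__()
        # Image branch: ResNet-18 backbone with its classification head removed.
        backbone = resnet18(weights=None)
        self.image_encoder = nn.Sequential(*list(backbone.children())[:-1])
        # Kinematic branch: LSTM over the time-series sensor channels.
        self.kinematic_encoder = nn.LSTM(
            input_size=kinematic_channels, hidden_size=128, batch_first=True
        )
        # Fusion head: concatenate both embeddings, regress one score per GRS domain.
        self.head = nn.Sequential(
            nn.Linear(512 + 128, 128), nn.ReLU(), nn.Linear(128, num_grs_domains)
        )

    def forward(self, images, kinematics):
        # images: (batch, 3, 224, 224); kinematics: (batch, time, channels)
        img_feat = self.image_encoder(images).flatten(1)           # (batch, 512)
        _, (h_n, _) = self.kinematic_encoder(kinematics)
        kin_feat = h_n[-1]                                         # (batch, 128)
        return self.head(torch.cat([img_feat, kin_feat], dim=1))   # (batch, domains)

# Example forward pass with dummy tensors.
model = MultiModalGRSRegressor()
scores = model(torch.randn(2, 3, 224, 224), torch.randn(2, 300, 6))
print(scores.shape)  # torch.Size([2, 5])
```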

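Similarly, the abstract reports MSE and ICC as the evaluation metrics without specifying the ICC form. The sketch below assumes a two-way random-effects, absolute-agreement, single-measure ICC(2,1), with the model treated as an additional rater alongside a human reference; this is one plausible reading, not a statement of the authors' exact analysis.

```python
# Minimal sketch of the two reported metrics: MSE and an assumed ICC(2,1).
import numpy as np

def mse(pred, target):
    pred, target = np.asarray(pred, float), np.asarray(target, float)
    return float(np.mean((pred - target) ** 2))

def icc2_1(scores):
    """ICC(2,1) for an (n_subjects, k_raters) score matrix (Shrout & Fleiss)."""
    Y = np.asarray(scores, float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between-subject
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between-rater
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols # residual
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)

# Example: hypothetical human GRS scores and model predictions for six trials.
human = np.array([3, 4, 5, 2, 4, 3], float)
model_pred = np.array([3.2, 3.8, 4.6, 2.4, 4.1, 3.1])
print(mse(model_pred, human))
print(icc2_1(np.column_stack([human, model_pred])))
```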
Details

Language :
English
ISSN :
14248220
Volume :
22
Issue :
19
Database :
Complementary Index
Journal :
Sensors (14248220)
Publication Type :
Academic Journal
Accession number :
159699406
Full Text :
https://doi.org/10.3390/s22197328