
Automating annotation of information-giving for analysis of clinical conversation.

Authors :
Mayfield E
Laws MB
Wilson IB
Penstein Rosé C
Source :
Journal of the American Medical Informatics Association : JAMIA [J Am Med Inform Assoc] 2014 Feb; Vol. 21 (e1), pp. e122-8. Date of Electronic Publication: 2013 Sep 12.
Publication Year :
2014

Abstract

Objective: Coding of clinical communication for fine-grained features such as speech acts has produced a substantial literature. However, annotation by humans is laborious and expensive, limiting application of these methods. We aimed to show that, through machine learning, computers could code certain categories of speech acts with sufficient reliability to make useful distinctions among clinical encounters.

Materials and Methods: The data were transcripts of 415 routine outpatient visits of HIV patients that had previously been coded for speech acts using the Generalized Medical Interaction Analysis System (GMIAS); 50 had also been coded for larger-scale features using the Comprehensive Analysis of the Structure of Encounters System (CASES). We aggregated selected speech acts into information-giving and information-requesting categories, then trained the machine to annotate automatically using logistic regression classification. We evaluated reliability by per-speech-act accuracy. We used multiple regression to predict patient reports of communication quality from post-visit surveys, using the patient and provider information-giving to information-requesting ratio (briefly, the information-giving ratio) and patient gender as predictors.

Results: Automated coding showed moderate agreement with human coding (accuracy 71.2%, κ=0.57), with high correlation between machine and human estimates of the information-giving ratio (r=0.96). The regression significantly predicted four of five patient-reported measures of communication quality (r=0.263-0.344).

Discussion: The information-giving ratio is a useful and intuitive measure for predicting patient perception of provider-patient communication quality. These predictions can be made with automated annotation, which is a practical option for studying large collections of clinical encounters with objectivity, consistency, and low cost, providing greater opportunity for training and reflection for care providers.
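The Materials and Methods paragraph describes two computable steps: classifying each utterance (information-giving vs. information-requesting) with logistic regression, and summarizing an encounter by the information-giving ratio. The sketch below illustrates that idea only; the scikit-learn pipeline, bag-of-words features, label names, and toy utterances are assumptions for illustration and are not the authors' actual GMIAS/CASES tooling or feature set.

```python
# Illustrative sketch, not the authors' pipeline: a logistic regression
# classifier over simple bag-of-words features, followed by the
# information-giving to information-requesting ratio for one encounter.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training utterances with coarse labels
# ("give" = information-giving, "request" = information-requesting, "other").
utterances = [
    "Your viral load is undetectable on the current regimen.",
    "How have you been tolerating the medication?",
    "I have been feeling more tired than usual lately.",
    "Okay, let's move on to the next thing.",
]
labels = ["give", "request", "give", "other"]

# Unigram counts feeding a logistic regression classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, labels)

def information_giving_ratio(predicted_labels):
    """Ratio of information-giving to information-requesting utterances."""
    gives = predicted_labels.count("give")
    requests = predicted_labels.count("request")
    return gives / requests if requests else float("inf")

predicted = list(model.predict(utterances))
print(information_giving_ratio(predicted))
```

In the study, ratios computed from machine labels correlated highly with those from human labels (r=0.96), which is why a per-encounter summary statistic like this can tolerate moderate per-utterance accuracy.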

Details

Language :
English
ISSN :
1527-974X
Volume :
21
Issue :
e1
Database :
MEDLINE
Journal :
Journal of the American Medical Informatics Association : JAMIA
Publication Type :
Academic Journal
Accession number :
24029598
Full Text :
https://doi.org/10.1136/amiajnl-2013-001898