
Ease of adoption of clinical natural language processing software: An evaluation of five systems.

Authors :
Zheng, Kai
Vydiswaran, V.G. Vinod
Liu, Yang
Wang, Yue
Stubbs, Amber
Uzuner, Özlem
Gururaj, Anupama E.
Bayer, Samuel
Aberdeen, John
Rumshisky, Anna
Pakhomov, Serguei
Liu, Hongfang
Xu, Hua
Source :
Journal of Biomedical Informatics; Dec 2015 Supplement, Vol. 58, p. S189-S196, 1p
Publication Year :
2015

Abstract

Objective: In recognition of potential barriers that may inhibit the widespread adoption of biomedical software, the 2014 i2b2 Challenge introduced a special track, Track 3 - Software Usability Assessment, to develop a better understanding of the adoption issues associated with state-of-the-art clinical NLP systems. This paper reports the ease-of-adoption assessment methods we developed for this track and the results of evaluating five clinical NLP system submissions.

Materials and Methods: A team of human evaluators performed a series of scripted adoptability test tasks with each of the participating systems. The evaluation team consisted of four "expert evaluators" with training in computer science and eight "end user evaluators" with mixed backgrounds in medicine, nursing, pharmacy, and health informatics. We assessed how easy it is to adopt the submitted systems along three dimensions: communication effectiveness (i.e., how effectively a system communicates its designed objectives to its intended audience), effort required to install, and effort required to use. We used a formal software usability testing tool, TURF, to record the evaluators' interactions with the systems, along with "think-aloud" data revealing their thought processes while installing and using the systems and while resolving unexpected issues.

Results: Overall, the ease-of-adoption ratings that the five systems received were unsatisfactory. Installation of some of the systems proved rather difficult, and some systems failed to adequately communicate their designed objectives to intended adopters. Further, the average ratings provided by the end user evaluators on ease of use and ease of interpreting output were -0.35 and -0.53, respectively, indicating that this group of users generally found the systems extremely difficult to work with. The ratings provided by the expert evaluators were higher, 0.6 and 0.45, respectively, but still low, indicating that they, too, experienced considerable difficulty.

Discussion: The results of the Track 3 evaluation show that the adoptability of the five participating clinical NLP systems leaves substantial room for improvement. Remedies suggested by the evaluators included (1) providing more detailed, operating system-specific usage instructions; (2) providing more pertinent on-screen feedback for easier diagnosis of problems; (3) including screen walk-throughs in the usage instructions so users know what to expect and what might have gone wrong; (4) avoiding jargon and acronyms in materials intended for end users; and (5) packaging required prerequisite software within the distributions so that prospective adopters do not have to obtain each third-party component on their own. [ABSTRACT FROM AUTHOR]
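
The group-average ratings reported in the Results are simple per-group means of individual evaluator scores. The sketch below is purely illustrative: the rating scale (assumed here to run from -2 to +2, centered at 0) and the individual evaluator scores are invented, since the abstract specifies neither, so the output only approximates the direction of the published averages.

from statistics import mean

# Hypothetical per-evaluator ratings on an assumed -2..+2 scale.
# Each tuple is (ease_of_use, ease_of_interpreting_output).
ratings = {
    "end_user": [(-1, -1), (0, -1), (-1, 0), (0, 0),
                 (-1, -1), (0, -1), (-1, 0), (1, 0)],
    "expert": [(1, 0), (0, 1), (1, 0), (1, 1)],
}

for group, scores in ratings.items():
    ease_of_use = mean(s[0] for s in scores)
    ease_of_output = mean(s[1] for s in scores)
    print(f"{group}: ease of use = {ease_of_use:.2f}, "
          f"ease of interpreting output = {ease_of_output:.2f}")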

Details

Language :
English
ISSN :
15320464
Volume :
58
Database :
Supplemental Index
Journal :
Journal of Biomedical Informatics
Publication Type :
Academic Journal
Accession Number :
111322811
Full Text :
https://doi.org/10.1016/j.jbi.2015.07.008