4 results
Search Results
2. How does the model make predictions? A systematic literature review on the explainability power of machine learning in healthcare.
- Author
- Allgaier, Johannes, Mulansky, Lena, Draelos, Rachel Lea, and Pryss, Rüdiger
- Subjects
- MACHINE learning; DECISION support systems; ARTIFICIAL intelligence; PREDICTION models; INFORMATION sharing
- Abstract
Medical use cases for machine learning (ML) are growing exponentially. The first hospitals are already using ML systems as decision support systems in their daily routine. At the same time, most ML systems remain opaque, and it is not clear how they arrive at their predictions. In this paper, we provide a brief overview of the taxonomy of explainability methods and review popular methods. In addition, we conduct a systematic literature search on PubMed to investigate which explainable artificial intelligence (XAI) methods are used in 450 specific medical supervised ML use cases, how the use of XAI methods has developed recently, and how precisely ML pipelines have been described over the past 20 years. A large fraction of publications with ML use cases do not use XAI methods at all to explain ML predictions. When XAI methods are used, however, open-source and model-agnostic explanation methods are more common, with SHapley Additive exPlanations (SHAP) and Gradient-weighted Class Activation Mapping (Grad-CAM) leading the way for tabular and image data, respectively. ML pipelines have been described in increasing detail and with increasing uniformity in recent years. However, the willingness to share data and code has stagnated at about one-quarter. XAI methods are mainly used when their application requires little effort. The homogenization of reports of ML use cases facilitates the comparability of work and should be advanced in the coming years. Due to the high complexity of the domain, experts who can mediate between the worlds of informatics and medicine will be increasingly in demand when ML systems are used.
• We estimate that only 16% of the reported explainability methods could be understood by patients.
• The distribution of data types in explainable ML applications is 51% tabular, 32% image, 3% text, and 0% audio.
• The quality and homogeneity of machine learning pipeline descriptions have increased in recent years.
• The data- and code-sharing ratio has stagnated at about one-quarter.
• The most popular explainability methods are SHAP, LIME, and Grad-CAM. [ABSTRACT FROM AUTHOR]
- Published
- 2023
- Full Text
- View/download PDF
3. Thirty years of artificial intelligence in medicine (AIME) conferences: A review of research themes.
- Author
- Peek, Niels, Combi, Carlo, Marin, Roque, and Bellazzi, Riccardo
- Subjects
- ARTIFICIAL intelligence in medicine; COMPUTERS in medicine; MEDICAL care conferences; MEDICAL terminology; MEDICAL technology; BIOTECHNOLOGY; ARTIFICIAL intelligence; MEDICAL research; DIGITAL image processing; SIGNAL processing equipment; CONFERENCES & conventions; MEDICINE; RESEARCH funding; UNCERTAINTY; DATA mining; EQUIPMENT & supplies
- Abstract
Background: Over the past 30 years, the international conference on Artificial Intelligence in MEdicine (AIME) has been organized at different venues across Europe every 2 years, establishing a forum for scientific exchange and creating an active research community. The Artificial Intelligence in Medicine journal has published theme issues with extended versions of selected AIME papers since 1998. Objectives: To review the history of AIME conferences, investigate their impact on the wider research field, and identify challenges for their future. Methods: We analyzed a total of 122 session titles to create a taxonomy of research themes and topics. We classified all 734 AIME conference papers published between 1985 and 2013 with this taxonomy. We also analyzed the citations to these conference papers and to 55 special-issue papers. Results: We identified 30 research topics across 12 themes. AIME was dominated by knowledge-engineering research in its first decade, while machine learning and data mining prevailed thereafter. Together, these two themes have contributed about 51% of all papers. Eight AIME papers have been cited at least 10 times per year since their publication. Conclusions: There has been a major shift from knowledge-based to data-driven methods, while interest in other research themes such as uncertainty management, image and signal processing, and natural language processing has been stable since the early 1990s. AIME papers relating to guidelines and protocols are among the most highly cited. [ABSTRACT FROM AUTHOR]
- Published
- 2015
- Full Text
- View/download PDF
4. A survey of deep learning models in medical therapeutic areas.
- Author
- Nogales, Alberto, García-Tejedor, Álvaro J., Monge, Diana, Vara, Juan Serrano, and Antón, Cristina
- Subjects
- CONVOLUTIONAL neural networks; ARTIFICIAL intelligence; IMAGE analysis; DIAGNOSIS
- Abstract
Artificial intelligence is a broad field that comprises a wide range of techniques, of which deep learning is presently the one with the most impact. Moreover, in the medical field, data are both complex and massive, and the importance of the decisions made by doctors makes it one of the fields in which deep learning techniques can have the greatest impact. A systematic review following the Cochrane recommendations was conducted by a multidisciplinary team comprising physicians, research methodologists, and computer scientists. This survey aims to identify the main therapeutic areas and the deep learning models used for diagnosis and treatment tasks. The most relevant databases included were MedLine, Embase, Cochrane Central, Astrophysics Data System, Europe PubMed Central, Web of Science, and Science Direct. Inclusion and exclusion criteria were defined and applied in the first and second peer-review screenings. A set of quality criteria was developed to select the papers obtained after the second screening. Finally, 126 studies were selected from the initial 3493 papers, and 64 were described. Results show that the number of publications on deep learning in medicine is increasing every year. Convolutional neural networks are the most widely used models, and the most developed area is oncology, where they are used mainly for image analysis. [ABSTRACT FROM AUTHOR]
- Published
- 2021
- Full Text
- View/download PDF
Discovery Service for Jio Institute Digital Library