8 results for "local interpretable model-agnostic explanations"
Search Results
2. Explainable AI for Intrusion Prevention: A Review of Techniques and Applications
- Author
-
Chandre, Pankaj R., Vanarote, Viresh, Patil, Rajkumar, Mahalle, Parikshit N., Shinde, Gitanjali R., Nimbalkar, Madhukar, Barot, Janki, Choudrie, Jyoti, editor, Mahalle, Parikshit N., editor, Perumal, Thinagaran, editor, and Joshi, Amit, editor
- Published
- 2023
- Full Text
- View/download PDF
3. Frontotemporal Dementia Detection Model Based on Explainable Machine Learning Approach
- Author
-
Poonam, Km, Guha, Rajlakshmi, Chakrabarti, Partha P., Chandran K R, Sarath, editor, N, Sujaudeen, editor, A, Beulah, editor, and Hamead H, Shahul, editor
- Published
- 2023
- Full Text
- View/download PDF
4. Contextualized Embeddings from Transformers for Sentiment Analysis on Code-Mixed Hinglish Data: An Expanded Approach with Explainable Artificial Intelligence
- Author
-
Yadav, Sargam, Kaushik, Abhishek, M, Anand Kumar, editor, Chakravarthi, Bharathi Raja, editor, B, Bharathi, editor, O’Riordan, Colm, editor, Murthy, Hema, editor, Durairaj, Thenmozhi, editor, and Mandl, Thomas, editor
- Published
- 2023
- Full Text
- View/download PDF
5. Explainable Soft Attentive EfficientNet for breast cancer classification in histopathological images.
- Author
-
Peta, Jyothi and Koppu, Srinivas
- Subjects
TUMOR classification, IMAGE recognition (Computer vision), BREAST, BREAST cancer, TIME complexity, FALSE discovery rate - Abstract
Breast Cancer (BC) is believed to be the cancer that occurs most frequently in women worldwide, taking the lives of its victims. Early diagnosis gives patients a greater probability of survival. Several existing studies use histopathology-image-based diagnostic mechanisms for early identification of breast tumors; however, these increase medical costs and consume time. To classify breast tumors accurately, this study suggests a novel explainable DL technique that achieves better classification accuracy. Improved accuracy may greatly help medical practitioners classify breast cancer effectively. First, an adaptive unsharp mask filtering (AUMF) technique is proposed to remove noise and enhance image quality. Then, an Explainable Soft Attentive EfficientNet (ESAE-Net) technique is introduced to classify the breast tumor (BT). Four explainable algorithms are investigated for improved visualizations over the BTs: Gradient-Weighted Class Activation Mapping (Grad-CAM), Shapley Additive Explanations (SHAP), Contextual Importance and Utility (CIU), and Local Interpretable Model-Agnostic Explanations (LIME). The suggested approach uses two publicly accessible breast histopathology image datasets and is implemented on a Python platform. Performance metrics such as time complexity, False Discovery Rate (FDR), accuracy, and Matthews correlation coefficient (MCC) are examined and contrasted with traditional research. In the experimental section, the proposed method obtains an accuracy of 97.85% for dataset 1 and 98.05% for dataset 2. In comparison with other existing methods, the proposed ESAE-Net is more efficient for classifying breast cancer.
• A novel and effective XAI-based breast cancer (BC) classification framework (ESAE-Net) that can provide better learning interpretations and decision-making processes.
• A novel adaptive unsharp mask filtering (AUMF) technique for enhancing the quality of breast histopathology images.
• A new and efficient Explainable Soft Attentive EfficientNet (ESAE-Net) model for classifying BC from histopathology images.
• Improvement of the proposed ESAE-Net by introducing various interpretability procedures: LIME, SHAP, CIU, and Grad-CAM.
• Validation of the proposed method through extensive simulations under different scenarios, demonstrating its effectiveness compared to other XAI frameworks.
A hedged code sketch of the LIME image-explanation step described here follows this record.
- Published
- 2024
- Full Text
- View/download PDF
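The ESAE-Net model from record 5 is not publicly available, so the sketch below only illustrates the LIME superpixel-explanation step its abstract describes, under stated assumptions: the classifier and the histopathology patch are random stand-ins included purely so the code runs, and the parameter values (image size, number of samples, number of superpixels) are illustrative, not the authors' settings.

```python
# Minimal, hedged sketch: LIME superpixel explanations for an image classifier.
# The trained ESAE-Net from the paper is not public, so a random-probability
# stand-in classifier and a random image are used only to make the sketch runnable.
import numpy as np
from lime import lime_image
from skimage.segmentation import mark_boundaries

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))          # stand-in for a preprocessed histopathology patch

def classifier_fn(images):
    """Stand-in for ESAE-Net: return per-class probabilities for a batch of images."""
    images = np.asarray(images)
    scores = rng.random((len(images), 2))  # two classes, e.g. benign / malignant
    return scores / scores.sum(axis=1, keepdims=True)

explainer = lime_image.LimeImageExplainer()
explanation = explainer.explain_instance(
    image,
    classifier_fn,
    top_labels=2,
    hide_color=0,
    num_samples=200,                       # illustrative; the paper's settings are unknown
)

# Keep only the superpixels that most support the top predicted class.
temp, mask = explanation.get_image_and_mask(
    explanation.top_labels[0], positive_only=True, num_features=5, hide_rest=False
)
overlay = mark_boundaries(temp, mask)      # image with explanatory superpixel boundaries drawn
```

With a real trained model, `classifier_fn` would wrap its prediction call (e.g. a softmax over benign/malignant), and `overlay` would be displayed next to Grad-CAM, SHAP, and CIU views as the paper does.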
6. Comprehensible Machine-Learning-Based Models for the Pre-Emptive Diagnosis of Multiple Sclerosis Using Clinical Data: A Retrospective Study in the Eastern Province of Saudi Arabia
- Author
-
Sunday O. Olatunji, Nawal Alsheikh, Lujain Alnajrani, Alhatoon Alanazy, Meshael Almusairii, Salam Alshammasi, Aisha Alansari, Rim Zaghdoud, Alaa Alahmadi, Mohammed Imran Basheer Ahmed, Mohammed Salih Ahmed, and Jamal Alhiyafi
- Subjects
shapley additive explanation, local interpretable model-agnostic explanations, machine learning, explainable artificial intelligence, Health, Toxicology and Mutagenesis, pre-emptive diagnosis, Public Health, Environmental and Occupational Health, multiple sclerosis - Abstract
Multiple Sclerosis (MS) is characterized by chronic deterioration of the nervous system, mainly the brain and the spinal cord. An individual develops MS when the immune system begins attacking nerve fibers and the myelin sheathing that covers them, affecting communication between the brain and the rest of the body and eventually causing permanent nerve damage. Patients with MS (pwMS) may experience different symptoms depending on which nerves are damaged and how much damage they have sustained. Currently, there is no cure for MS; however, there are clinical guidelines that help control the disease and its accompanying symptoms. Additionally, no specific laboratory biomarker can precisely identify the presence of MS, leaving specialists with a differential diagnosis that relies on ruling out other possible diseases with similar symptoms. Since the emergence of Machine Learning (ML) in the healthcare industry, it has become an effective tool for uncovering hidden patterns that aid in diagnosing several ailments. Several studies have diagnosed MS using ML and Deep Learning (DL) models trained on MRI images, achieving promising results. However, complex and expensive diagnostic tools are needed to collect and examine imaging data. Thus, this study implements a cost-effective, clinical data-driven model capable of diagnosing pwMS. The dataset was obtained from King Fahad Specialty Hospital (KFSH) in Dammam, Saudi Arabia. Several ML algorithms were compared, namely Support Vector Machine (SVM), Decision Tree (DT), Logistic Regression (LR), Random Forest (RF), Extreme Gradient Boosting (XGBoost), Adaptive Boosting (AdaBoost), and Extra Trees (ET). The results indicated that the ET model outpaced the rest with an accuracy of 94.74%, recall of 97.26%, and precision of 94.67%. A hedged code sketch of this pipeline shape follows this record.
- Published
- 2023
- Full Text
- View/download PDF
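The KFSH clinical dataset in record 6 is not public, so the sketch below uses a synthetic tabular dataset just to show the shape of the reported pipeline: fit an Extra Trees classifier (the study's best performer) and explain one patient-level prediction with LIME's tabular explainer. The feature names, class labels, and hyperparameters are hypothetical placeholders, not the study's.

```python
# Hedged sketch of the pipeline shape in record 6: Extra Trees on tabular clinical
# data, with a per-patient LIME explanation. Data, feature names, and settings are
# synthetic placeholders; the study's KFSH dataset is not publicly available.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
feature_names = [f"clinical_feature_{i}" for i in range(X.shape[1])]  # hypothetical names
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = ExtraTreesClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

explainer = LimeTabularExplainer(
    X_train,
    feature_names=feature_names,
    class_names=["non-MS", "MS"],   # illustrative labels
    mode="classification",
)
# Explain why the model classified the first held-out patient the way it did.
exp = explainer.explain_instance(X_test[0], model.predict_proba, num_features=5)
print(exp.as_list())                # (feature condition, weight) pairs for this prediction
```

The same pattern extends to SHAP (the study's other listed technique) by swapping the explainer while keeping the fitted Extra Trees model.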
7. Explainable Reinforcement Learning for Gameplay
- Author
-
Costa Sánchez, Àlex
- Subjects
Local Interpretable Model-agnostic Explanations, Shapley Additive Explanations, Explainable Artificial Intelligence, Reinforcement Learning, Computer and Information Sciences - Abstract
State-of-the-art Machine Learning (ML) algorithms show impressive results for a myriad of applications. However, they operate as a sort of black box: the decisions taken are not human-understandable. There is a need for transparency and interpretability if ML predictions are to be more widely accepted in society, especially in specific fields such as medicine or finance. Most of the efforts so far have focused on explaining supervised learning. This project aims to take some of these successful explainability algorithms and apply them to Reinforcement Learning (RL). To do so, we explain the actions of an RL agent playing Atari’s Breakout game, using two different explainability algorithms: Shapley Additive Explanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME). We successfully implement both algorithms, which yield credible and insightful explanations of the mechanics of the agent. However, we think the final presentation of the results is sub-optimal for the end user, as it is not intuitive at first sight. A hedged code sketch of the model-agnostic pattern used here follows this record.
- Published
- 2022
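The thesis in record 7 applies SHAP and LIME to a trained Breakout agent, which is not reproduced here. The sketch below only shows the general model-agnostic pattern with SHAP's KernelExplainer: a random stand-in policy scored over a small, hand-named feature vector instead of raw Atari frames. The feature names, action count, and policy are hypothetical assumptions made purely for illustration.

```python
# Hedged sketch of the pattern in record 7: model-agnostic SHAP attributions for the
# action preferences of an RL policy. The policy is a random stand-in and the
# low-dimensional features are hypothetical; the thesis used a trained Breakout agent.
import numpy as np
import shap

rng = np.random.default_rng(0)
feature_names = ["ball_x", "ball_y", "paddle_x", "lives"]   # hypothetical observation features
n_actions = 4                                               # e.g. NOOP, FIRE, RIGHT, LEFT

def policy_scores(observations):
    """Stand-in policy: return one score per action for each observation row."""
    observations = np.atleast_2d(observations)
    w = np.arange(1, observations.shape[1] + 1, dtype=float)
    base = observations @ w
    return np.column_stack([base * (a + 1) for a in range(n_actions)])

background = rng.random((25, len(feature_names)))   # reference observations for the explainer
to_explain = rng.random((3, len(feature_names)))    # observations whose action scores we explain

explainer = shap.KernelExplainer(policy_scores, background)
shap_values = explainer.shap_values(to_explain, nsamples=100)
# shap_values holds per-feature attributions for each of the four action scores;
# the exact container (list of arrays vs. one 3-D array) depends on the shap version.
```

With a real agent, `policy_scores` would wrap the agent's Q-values or action logits; LIME can be applied to the same wrapper in its tabular mode.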
8. Interpreting Multivariate Time Series for an Organization Health Platform
- Author
-
Saluja, Rohit
- Subjects
Computer and Information Sciences, Time series, Local interpretable model-agnostic explanations, Shapley additive explanations, Interpretability, Explainable artificial intelligence, Forecasting - Abstract
Machine learning-based systems are rapidly becoming popular because it has been realized that machines are more efficient and effective than humans at performing certain tasks. Although machine learning algorithms are extremely popular, they are also very literal and undeviating. This has led to a huge research surge in the field of interpretability in machine learning, to ensure that machine learning models are reliable, fair, and can be held accountable for their decision-making process. Moreover, in most real-world problems, making predictions with machine learning algorithms only solves the problem partially. Time series is one of the most popular and important data types because of its dominant presence in the fields of business, economics, and engineering. Despite this, interpretability in time series is still relatively unexplored compared to tabular, text, and image data. With the growing research in the field of interpretability in machine learning, there is also a pressing need to quantify the quality of the explanations produced by interpreting machine learning models. For this reason, evaluation of interpretability is extremely important; for models built on time series, it appears almost completely unexplored in research circles. This thesis work focuses on achieving and evaluating model-agnostic interpretability in a time series forecasting problem. The use case concerns a digital consultancy company that wants to take a data-driven approach to understanding the effect of its various sales-related activities on the sales deals it closes. The solution involved framing the problem as a time series forecasting problem to predict the sales deals and interpreting the underlying forecasting model. Interpretability was achieved using two model-agnostic interpretability techniques, Local Interpretable Model-agnostic Explanations (LIME) and Shapley Additive Explanations (SHAP), and the resulting explanations were evaluated through human evaluation of interpretability. The results of the human evaluation studies clearly indicate that the explanations produced by LIME and SHAP greatly helped lay users understand the predictions made by the machine learning model. They also indicated that LIME and SHAP explanations were almost equally understandable, with LIME performing better but by a very small margin. The work done during this project can easily be extended to any time series forecasting or classification scenario for achieving and evaluating interpretability. Furthermore, it can offer a good framework for achieving and evaluating interpretability in any machine learning-based regression or classification problem. A hedged code sketch of the lag-feature forecasting framing follows this record.
- Published
- 2020
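The consultancy's sales data in record 8 is private, so the sketch below only illustrates the framing the thesis describes: turn a univariate series into lagged supervised features, fit a regressor to forecast the next value, and attribute one forecast to the individual lags with SHAP. The synthetic series, lag count, and model choice are assumptions, not the thesis' configuration.

```python
# Hedged sketch of the framing in record 8: a univariate series becomes lagged
# supervised features, a regressor forecasts the next value, and SHAP attributes
# one forecast to each lag. Series, lag count, and model are illustrative choices.
import numpy as np
import shap
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
series = np.sin(np.arange(300) / 10.0) + 0.1 * rng.standard_normal(300)  # synthetic "sales"

n_lags = 8
X = np.column_stack([series[i : len(series) - n_lags + i] for i in range(n_lags)])
y = series[n_lags:]
feature_names = [f"lag_{n_lags - i}" for i in range(n_lags)]  # lag_8 ... lag_1

# Hold out the last 20 points; fit the forecaster on the rest.
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X[:-20], y[:-20])

# Attribute the most recent forecast to each lagged input.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[-1:])
for name, value in zip(feature_names, shap_values[0]):
    print(f"{name}: {value:+.4f}")
```

LIME's tabular explainer in regression mode can be applied to the same lag matrix, which is how the thesis compares the two techniques before its human evaluation.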