Exploring local interpretability in dimensionality reduction: Analysis and use cases.
- Source :
- Expert Systems with Applications, Oct 2024, Part A, Vol. 252.
- Publication Year :
- 2024
Abstract
- Dimensionality reduction (DR) is a crucial area in artificial intelligence that enables the visualization and analysis of high-dimensional data. Its main use is to lower the dimensional complexity of data, improving the performance of machine learning models. Non-linear DR approaches provide higher-quality representations than linear ones, but they lack interpretability, which prevents their use in tasks that require it. This paper presents LXDR (Local eXplanation of Dimensionality Reduction), a local, model-agnostic method that can be applied to any DR technique. LXDR trains linear models on a neighborhood of a specific instance and provides local interpretations, using a variety of neighborhood generation techniques. Variations of the proposed technique are also introduced. The effectiveness of LXDR's interpretations is evaluated through quantitative and qualitative experiments, as well as demonstrations of its practical implementation in diverse use cases. The experiments emphasize the importance of interpretability in dimensionality reduction and how LXDR reinforces it.
• Non-linear dimensionality reduction is necessary in several use cases.
• Interpretability is vital for most of those use cases.
• LXDR agnostically interprets any dimensionality reduction technique.
• LXDR is applicable to various domains and use cases. [ABSTRACT FROM AUTHOR]
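The core idea described in the abstract — perturb a neighborhood around an instance, embed it with the (possibly non-linear) DR mapping, and fit a local linear surrogate whose coefficients act as the interpretation — can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian neighborhood generator, the `lxdr_local_coefs` function name, and the toy DR mapping are all illustrative choices, not taken from the paper.

```python
import numpy as np

def lxdr_local_coefs(transform, x, n_samples=200, scale=0.1, rng=None):
    """Sketch of an LXDR-style local explanation.

    Generates a Gaussian neighborhood around instance `x` (one of several
    possible neighborhood generation strategies), embeds it with the
    black-box DR mapping `transform`, and fits a linear surrogate by least
    squares. The coefficient matrix (features x components) is returned as
    the local interpretation.
    """
    rng = np.random.default_rng(rng)
    # Perturbed neighborhood around the instance of interest.
    X = x + scale * rng.standard_normal((n_samples, x.size))
    # Embed the neighborhood with the (possibly non-linear) DR technique.
    Y = transform(X)
    # Fit a local linear model Y ~ X (with intercept) via least squares.
    Xc = np.hstack([X, np.ones((n_samples, 1))])
    W, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
    return W[:-1]  # drop intercept row -> (n_features, n_components)

# Toy non-linear "DR": squared distances to two fixed anchor points.
anchors = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 1.0]])

def toy_dr(X):
    return np.stack([((X - a) ** 2).sum(axis=1) for a in anchors], axis=1)

x = np.array([0.5, 0.2, 0.9])
W = lxdr_local_coefs(toy_dr, x, rng=0)  # local feature weights per component
```

For this smooth toy mapping the surrogate coefficients approximate the local gradient of each embedding component (here, `2 * (x - anchor)`), which is exactly the kind of per-feature contribution a local linear explanation exposes.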
- Subjects :
- *ARTIFICIAL intelligence
*MACHINE performance
Details
- Language :
- English
- ISSN :
- 0957-4174
- Volume :
- 252
- Database :
- Academic Search Index
- Journal :
- Expert Systems with Applications
- Publication Type :
- Academic Journal
- Accession number :
- 177746619
- Full Text :
- https://doi.org/10.1016/j.eswa.2024.124074