
Interpretation of linear classifiers by means of feature relevance bounds.

Authors :
Göpfert, Christina
Pfannschmidt, Lukas
Göpfert, Jan Philip
Hammer, Barbara
Source :
Neurocomputing, Jul 2018, Vol. 298, p. 69-79. 11p.
Publication Year :
2018

Abstract

Research on feature relevance and feature selection goes back several decades, but the importance of these areas continues to grow as more data becomes available and machine learning methods are increasingly used to gain insight and interpret data, rather than solely to solve classification or regression problems. Although feature relevance is often discussed, it is frequently poorly defined, and the feature selection problems studied are subtly different. Furthermore, the problem of finding all features relevant for a classification problem has only recently started to gain traction, despite its importance for interpretability and for integrating expert knowledge. In this paper, we attempt to unify commonly used concepts and to give an overview of the main questions and results. We formalize two interpretations of the all-relevant problem and propose a polynomial-time method to approximate one of them for the important hypothesis class of linear classifiers, which also enables a distinction between strongly and weakly relevant features.
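A minimal sketch of the relevance-bounds idea for linear classifiers follows, assuming linearly separable toy data, an L1-minimal hard-margin baseline, and a slack factor delta; these modeling choices and the cvxpy formulation are illustrative assumptions, not necessarily the paper's exact method.

```python
# Sketch: feature relevance intervals for a linear classifier.
# Assumptions (hypothetical, for illustration): separable toy data,
# hard-margin constraints y_j (w.x_j + b) >= 1, slack factor `delta`.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 60, 5
X = rng.normal(size=(n, d))
y = np.sign(X[:, 0])   # label depends only on feature 0
X[:, 1] = X[:, 0]      # feature 1 duplicates feature 0; features 2-4 are noise

# Baseline: smallest L1 norm of w that separates the data with margin 1.
w, b = cp.Variable(d), cp.Variable()
separation = [cp.multiply(y, X @ w + b) >= 1]
mu_star = cp.Problem(cp.Minimize(cp.norm1(w)), separation).solve()

def relevance_bounds(i, delta=0.05):
    """Smallest and largest |w_i| over near-optimal separating hyperplanes."""
    cons = separation + [cp.norm1(w) <= (1 + delta) * mu_star]
    lo = cp.Problem(cp.Minimize(cp.abs(w[i])), cons).solve()
    # Maximizing |w_i| directly is non-convex; split into two linear objectives.
    hi = max(cp.Problem(cp.Maximize(w[i]), cons).solve(),
             cp.Problem(cp.Maximize(-w[i]), cons).solve())
    return lo, hi

for i in range(d):
    lo, hi = relevance_bounds(i)
    if lo > 1e-6:
        kind = "strongly relevant"   # every near-optimal model uses the feature
    elif hi > 1e-6:
        kind = "weakly relevant"     # some, but not all, near-optimal models use it
    else:
        kind = "irrelevant"
    print(f"feature {i}: relevance interval [{lo:.3f}, {hi:.3f}] -> {kind}")
```

The interval endpoints are what distinguish strong from weak relevance: a strictly positive lower bound means no well-performing linear model can ignore the feature, while a zero lower bound with a positive upper bound means the feature is useful but replaceable. In practice the "approximately zero" threshold needs more care than the fixed 1e-6 used here, since the slack delta allows near-optimal models to place small weight even on uninformative features.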

Details

Language :
English
ISSN :
0925-2312
Volume :
298
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
129273396
Full Text :
https://doi.org/10.1016/j.neucom.2017.11.074