
Self-Reinforcement Attention Mechanism For Tabular Learning

Authors:
Amekoe, Kodjo Mawuena
Dilmi, Mohamed Djallel
Azzag, Hanene
Lebbah, Mustapha
Dagdia, Zaineb Chelly
Jaffre, Gregoire
Publication Year:
2023

Abstract

Apart from the high accuracy of machine learning models, what interests many researchers in real-life problems (e.g., fraud detection, credit scoring) is finding hidden patterns in the data, particularly when dealing with its challenging imbalanced characteristics. Interpretability is also a key requirement that must accompany the machine learning model in use. For this reason, intrinsically interpretable models are often preferred to complex ones, which are in most cases black-box models. Linear models are even used in some high-risk fields to handle tabular data, at the cost of some performance. In this paper, we introduce Self-Reinforcement Attention (SRA), a novel attention mechanism that provides feature relevance as a weight vector used to learn an intelligible representation: the weight vector reinforces or reduces components of the raw input through element-wise vector multiplication. Our results on synthetic and real-world imbalanced data show that our proposed SRA block is effective in end-to-end combination with baseline models.
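To make the mechanism concrete, below is a minimal sketch of an SRA-style block in PyTorch. It assumes the weight vector is produced by a small MLP with a Softplus output, so weights above 1 reinforce a component and weights below 1 reduce it; the exact architecture, as well as the names SRABlock and hidden_dim, are our own illustration and not the authors' implementation.

```python
import torch
import torch.nn as nn

class SRABlock(nn.Module):
    """Illustrative SRA-style block (hypothetical architecture, not the paper's).

    A small MLP maps the raw input to a per-feature weight vector; the
    input is then reweighted by element-wise multiplication, so weights
    above 1 reinforce a feature and weights below 1 reduce it.
    """

    def __init__(self, num_features: int, hidden_dim: int = 32):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(num_features, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_features),
            nn.Softplus(),  # positive per-feature weights that can exceed 1
        )

    def forward(self, x: torch.Tensor):
        a = self.attention(x)  # relevance weight vector, shape (batch, num_features)
        return x * a, a        # reinforced representation plus weights for inspection

# Toy usage: the reinforced representation feeds any downstream baseline model.
x = torch.randn(8, 10)  # batch of 8 samples, 10 tabular features
reinforced, weights = SRABlock(num_features=10)(x)
print(reinforced.shape, weights.shape)  # torch.Size([8, 10]) torch.Size([8, 10])
```

Returning the weight vector alongside the representation is what supports interpretability here: the weights can be read directly as per-sample feature relevance.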

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2305.11684
Document Type:
Working Paper