
Information Theoretic Evaluation of Privacy-Leakage, Interpretability, and Transferability for Trustworthy AI

Authors:
Kumar, Mohit
Moser, Bernhard A.
Fischer, Lukas
Freudenthaler, Bernhard
Publication Year:
2021

Abstract

To develop machine learning and deep learning models that follow the guidelines and principles of trustworthy AI, a novel information theoretic trustworthy AI framework is introduced. A unified approach to "privacy-preserving interpretable and transferable learning" is taken to study and optimize the tradeoffs among privacy, interpretability, and transferability. A variational membership-mapping Bayesian model provides analytical approximations of the defined information theoretic measures of privacy-leakage, interpretability, and transferability; each measure is approximated by maximizing a lower bound via variational optimization. The result is a unified, rigorously analytical treatment of these aspects of trustworthy AI, demonstrated through numerous experiments on benchmark datasets and a real-world biomedical application: detecting mental stress in individuals via heart rate variability analysis.

Comment: arXiv admin note: text overlap with arXiv:2105.04615, arXiv:2104.07060
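The paper's own measures and membership-mapping model are not reproduced here. As a minimal illustration of the lower-bound idea the abstract mentions, the sketch below computes the classical Barber–Agakov variational lower bound on mutual information for a toy jointly Gaussian pair, where the true value is known in closed form; the Gaussian setup, variable names, and linear-Gaussian decoder are all assumptions made for this example, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 100_000, 0.8

# Toy jointly Gaussian pair (x, z) with correlation rho.
# Closed-form mutual information: I(X;Z) = -0.5 * log(1 - rho^2).
z = rng.standard_normal(n)
x = rho * z + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
true_mi = -0.5 * np.log(1.0 - rho**2)

# Variational "decoder" q(x|z) = N(a*z + b, s2), fitted by least squares.
a, b = np.polyfit(z, x, 1)
resid = x - (a * z + b)
s2 = resid.var()

# Barber-Agakov bound: I(X;Z) >= H(X) + E[log q(x|z)].
# Maximizing the bound over q tightens it; here the optimal linear-Gaussian
# decoder makes it (nearly) tight for Gaussian data.
h_x = 0.5 * np.log(2.0 * np.pi * np.e * x.var())  # Gaussian entropy of x
e_log_q = -0.5 * np.log(2.0 * np.pi * s2) - (resid**2).mean() / (2.0 * s2)
bound = h_x + e_log_q

print(f"true MI  = {true_mi:.4f}")
print(f"BA bound = {bound:.4f}")  # approaches the true value from below, up to sampling noise
```

For Gaussian data the fitted linear decoder is the optimal choice of q, so the bound nearly matches the closed-form value; with a misspecified q the bound would simply be looser, which mirrors why the variational family matters in the kind of lower-bound maximization the abstract describes.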

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2106.06046
Document Type:
Working Paper