Tighter Bounds on the Information Bottleneck with Application to Deep Learning

Authors :
Weingarten, Nir
Yakhini, Zohar
Butman, Moshe
Gilad-Bachrach, Ran
Publication Year :
2024

Abstract

Deep Neural Nets (DNNs) learn latent representations induced by their downstream task, objective function, and other parameters. The quality of the learned representations impacts the DNN's generalization ability and the coherence of the emerging latent space. The Information Bottleneck (IB) provides a hypothetically optimal framework for data modeling, yet it is often intractable. Recent efforts have combined DNNs with the IB by applying VAE-inspired variational methods to approximate bounds on mutual information, resulting in improved robustness to adversarial attacks. This work introduces a new and tighter variational bound for the IB, improving the performance of previous IB-inspired DNNs. These advancements strengthen the case for the IB and its variational approximations as a data modeling framework, and provide a simple method to significantly enhance the adversarial robustness of classifier DNNs.

Comment: 10 pages, 5 figures, code included in GitHub repo
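For background (this formula is standard and not quoted from the record itself): the IB objective referenced in the abstract is the trade-off introduced by Tishby, Pereira, and Bialek, where a stochastic encoding T of the input X is sought that compresses X while remaining predictive of the label Y:

```latex
\min_{p(t \mid x)} \; I(X;T) - \beta\, I(T;Y),
\qquad \text{subject to the Markov chain } Y \leftrightarrow X \leftrightarrow T,
```

where \(\beta > 0\) balances compression against prediction. Both mutual-information terms are generally intractable for DNN encoders, which is why the variational approximations mentioned in the abstract are needed.

As a rough illustration of the "VAE-inspired variational methods" the abstract refers to, the sketch below implements the standard Variational Information Bottleneck (VIB) loss of Alemi et al. (2017), not the tighter bound contributed by this paper; the function names and the hyperparameter value are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def vib_loss(mu, logvar, logits, labels, beta=1e-3):
    """Standard VIB objective: cross-entropy (a variational bound related to
    -I(T;Y)) plus beta * KL(p(t|x) || N(0, I)) (an upper bound on I(X;T))."""
    # Prediction term: expected negative log-likelihood of the labels
    # under the decoder's class logits.
    ce = F.cross_entropy(logits, labels)
    # Compression term: analytic KL between the Gaussian encoder
    # N(mu, diag(exp(logvar))) and the standard-normal prior r(t) = N(0, I).
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp(), dim=1).mean()
    return ce + beta * kl

def reparameterize(mu, logvar):
    """Sample t ~ N(mu, diag(exp(logvar))) via the reparameterization trick,
    so gradients flow through the stochastic encoding."""
    std = torch.exp(0.5 * logvar)
    return mu + std * torch.randn_like(std)
```

In a typical setup, an encoder network maps each input x to (mu, logvar), `reparameterize` draws the stochastic representation t, a decoder maps t to class logits, and `vib_loss` is minimized; the paper's contribution is a tighter variational bound than the one sketched here.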

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2402.07639
Document Type :
Working Paper