
Interpreting a Penalty as the Influence of a Bayesian Prior

Authors :
Wolinski, Pierre
Charpiat, Guillaume
Ollivier, Yann
TAckling the Underspecified (TAU), Inria Saclay - Île-de-France
Institut National de Recherche en Informatique et en Automatique (Inria) - Laboratoire de Recherche en Informatique (LRI)
CentraleSupélec - Université Paris-Saclay - Centre National de la Recherche Scientifique (CNRS)
Facebook AI Research [Paris] (FAIR)
Publication Year :
2020
Publisher :
HAL CCSD, 2020.

Abstract

24 pages, including 2 pages of references and 10 pages of appendix.

In machine learning, it is common to optimize the parameters of a probabilistic model, modulated by a somewhat ad hoc regularization term that penalizes some values of the parameters. Regularization terms appear naturally in Variational Inference (VI), a tractable way to approximate Bayesian posteriors: the loss to optimize contains a Kullback–Leibler divergence term between the approximate posterior and a Bayesian prior. We fully characterize which regularizers can arise this way, and provide a systematic way to compute the corresponding prior. This viewpoint also provides a prediction for useful values of the regularization factor in neural networks. We apply this framework to regularizers such as L1 or group-Lasso.
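The correspondence described in the abstract can be illustrated with a minimal sketch. The code below is an assumption-laden toy example, not the paper's actual construction: it takes a factorized Gaussian approximate posterior and a zero-mean Gaussian prior, and uses the closed-form KL divergence between univariate Gaussians, in which the term mu**2 / (2 * prior_sigma**2) plays the role of a familiar L2-style penalty on the posterior means. All function names (`gaussian_kl`, `vi_loss`) are hypothetical.

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """Closed-form KL(q || p) between two univariate Gaussians
    q = N(mu_q, sigma_q^2) and p = N(mu_p, sigma_p^2)."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2 * sigma_p ** 2)
            - 0.5)

def vi_loss(data_nll, mus, sigmas, prior_sigma=1.0, reg_factor=1.0):
    """Negative ELBO: data negative log-likelihood plus the KL term
    between a factorized Gaussian approximate posterior and a
    zero-mean Gaussian prior N(0, prior_sigma^2).

    Expanding gaussian_kl shows that the KL contributes a penalty
    mu**2 / (2 * prior_sigma**2) on each posterior mean, i.e. an
    L2 regularizer whose strength is set by the prior variance."""
    kl = sum(gaussian_kl(m, s, 0.0, prior_sigma)
             for m, s in zip(mus, sigmas))
    return data_nll + reg_factor * kl

# With sigma_q == prior_sigma, the KL reduces exactly to the
# quadratic penalty on the means: mu^2 / (2 * prior_sigma^2).
loss = vi_loss(data_nll=2.0, mus=[1.0], sigmas=[1.0])
```

In this degenerate setting the regularization factor multiplies the KL term directly, which is the quantity whose "useful values" the abstract says the Bayesian viewpoint predicts; other penalties (L1, group-Lasso) would correspond to different priors.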

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.dedup.wf.001..ce8511993fb0365500bdb02a62540b00