
Randomized learning and generalization of fair and private classifiers: From PAC-Bayes to stability and differential privacy

Authors:
Massimiliano Pontil
Luca Oneto
John Shawe-Taylor
Michele Donini
Source:
Neurocomputing. 416:231-243
Publication Year:
2020
Publisher:
Elsevier BV, 2020.

Abstract

We address the problem of randomized learning and generalization of fair and private classifiers. On the one hand, we want to ensure that sensitive information does not unfairly influence the outcome of a classifier; on the other hand, we must learn from data while preserving the privacy of individual observations. We first tackle this problem in the PAC-Bayes framework, presenting an approach that bounds and trades off the risk and the fairness of the randomized (Gibbs) classifier. The approach handles several different state-of-the-art fairness measures. To this end, we further develop the idea that the PAC-Bayes prior can be defined in terms of the data-generating distribution without actually knowing it. In particular, we define a prior and a posterior that give more weight to functions with good generalization and fairness properties. We then show that this randomized classifier possesses interesting stability properties, using the theory of algorithmic distribution stability. Finally, we show that the new posterior can be exploited to define a randomized algorithm that is both accurate and fair, and differential privacy theory allows us to prove that this algorithm has interesting privacy-preserving properties, ensuring our threefold goal of good generalization, fairness, and privacy of the final model.
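
The posterior described above, which gives more weight to functions with good generalization and fairness properties, can be pictured with a minimal Gibbs-classifier sketch. Everything below is an illustrative assumption rather than the paper's construction: the finite threshold hypothesis class, the demographic-parity gap standing in for the paper's several fairness measures, and the hypothetical parameters beta and lam controlling the posterior's concentration and the accuracy/fairness trade-off. The exponential weighting also resembles the exponential mechanism from differential privacy, which gives some intuition for why such a posterior can come with privacy-preserving properties.

```python
import numpy as np

def empirical_risk(h, X, y):
    # Fraction of misclassified examples for hypothesis h.
    return np.mean(h(X) != y)

def fairness_gap(h, X, s):
    # Demographic-parity gap |P(h=1 | s=0) - P(h=1 | s=1)|:
    # one possible fairness measure, chosen here for illustration only.
    preds = h(X)
    return abs(preds[s == 0].mean() - preds[s == 1].mean())

def gibbs_posterior(hypotheses, X, y, s, beta=10.0, lam=1.0):
    # Weight each hypothesis by exp(-beta * (risk + lam * unfairness)),
    # so hypotheses that are both accurate and fair get most of the mass.
    scores = np.array([empirical_risk(h, X, y) + lam * fairness_gap(h, X, s)
                       for h in hypotheses])
    w = np.exp(-beta * scores)
    return w / w.sum()

def gibbs_predict(hypotheses, weights, x, rng):
    # Randomized (Gibbs) prediction: draw one hypothesis from the
    # posterior and apply it to the input.
    h = hypotheses[rng.choice(len(hypotheses), p=weights)]
    return h(np.atleast_2d(x))[0]

# Toy data with a binary sensitive attribute s.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
s = (rng.random(200) < 0.5).astype(int)
y = (X[:, 0] + 0.3 * s + 0.1 * rng.normal(size=200) > 0).astype(int)

# A finite hypothesis class: axis-aligned threshold classifiers.
hypotheses = [lambda Z, t=t: (Z[:, 0] > t).astype(int)
              for t in np.linspace(-1.0, 1.0, 21)]

weights = gibbs_posterior(hypotheses, X, y, s)
print(gibbs_predict(hypotheses, weights, X[0], rng))
```

In this sketch, increasing lam shifts the posterior mass toward fairer thresholds at some cost in risk, mirroring the risk/fairness trade-off the abstract describes.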

Details

ISSN:
09252312
Volume:
416
Database:
OpenAIRE
Journal:
Neurocomputing
Accession number:
edsair.doi.dedup.....67da9bba70472894893db0b489614bd9
Full Text:
https://doi.org/10.1016/j.neucom.2019.12.137