
When Privacy Meets Partial Information: A Refined Analysis of Differentially Private Bandits

Authors :
Azize, Achraf
Basu, Debabrota
Scool, Inria Lille - Nord Europe
Institut National de Recherche en Informatique et en Automatique (Inria)
Centre de Recherche en Informatique, Signal et Automatique de Lille - UMR 9189 (CRIStAL)
Centrale Lille, Université de Lille, Centre National de la Recherche Scientifique (CNRS)
We thank the AI_PhD@Lille grant.
Source :
Advances in Neural Information Processing Systems (NeurIPS), Dec 2022, New Orleans, United States
Publication Year :
2022
Publisher :
arXiv, 2022.

Abstract

We study the problem of multi-armed bandits with $\epsilon$-global Differential Privacy (DP). First, we prove minimax and problem-dependent regret lower bounds for stochastic and linear bandits that quantify the hardness of bandits with $\epsilon$-global DP. These bounds suggest the existence of two hardness regimes depending on the privacy budget $\epsilon$. In the high-privacy regime (small $\epsilon$), the hardness depends on a coupled effect of privacy and partial information about the reward distributions. In the low-privacy regime (large $\epsilon$), bandits with $\epsilon$-global DP are not harder than bandits without privacy. For stochastic bandits, we further propose a generic framework to design a near-optimal $\epsilon$-global DP extension of an index-based optimistic bandit algorithm. The framework consists of three ingredients: the Laplace mechanism, arm-dependent adaptive episodes, and using only the rewards collected in the last episode to compute private statistics. Specifically, we instantiate $\epsilon$-global DP extensions of the UCB and KL-UCB algorithms, namely AdaP-UCB and AdaP-KLUCB. AdaP-KLUCB is the first algorithm that both satisfies $\epsilon$-global DP and yields a regret upper bound that matches the problem-dependent lower bound up to multiplicative constants.

Comment: Appears in NeurIPS 2022. From v1, the minimax lower bound for linear bandits is changed to $O(\max(d \sqrt{T}, d/\epsilon))$.
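The first ingredient of the framework, the Laplace mechanism, can be illustrated with a short sketch. The snippet below (the function name, the assumption that rewards lie in $[0, 1]$, and the sampling details are mine, not the paper's exact AdaP-UCB implementation) releases an $\epsilon$-DP empirical mean: since changing one of the $n$ rewards shifts the mean by at most $1/n$, adding Laplace noise of scale $1/(n\epsilon)$ suffices.

```python
import random


def private_mean(rewards, epsilon):
    """Release an epsilon-DP empirical mean of rewards assumed to lie in [0, 1].

    Illustrative sketch of the Laplace mechanism only; the paper's
    AdaP-UCB/AdaP-KLUCB additionally use arm-dependent adaptive episodes
    and only the rewards of the last episode.
    """
    n = len(rewards)
    empirical_mean = sum(rewards) / n
    # Sensitivity of the mean is 1/n, so Laplace noise of scale
    # 1/(n * epsilon) guarantees epsilon-DP.
    scale = 1.0 / (n * epsilon)
    # A Laplace(0, b) sample is the difference of two Exp(rate=1/b) samples.
    noise = random.expovariate(1.0 / scale) - random.expovariate(1.0 / scale)
    return empirical_mean + noise
```

In the high-privacy regime (small $\epsilon$) the noise scale $1/(n\epsilon)$ dominates the usual $O(1/\sqrt{n})$ sampling error of the mean, which matches the abstract's observation that privacy and partial information couple in that regime.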

Details

Database :
OpenAIRE
Journal :
Advances in Neural Information Processing Systems (NeurIPS), Dec 2022, New Orleans, United States
Accession number :
edsair.doi.dedup.....ad7b5c43efbfb8e91a03d474425f1f8d
Full Text :
https://doi.org/10.48550/arxiv.2209.02570