Fine-Tuning Large Language Models with User-Level Differential Privacy

Authors :
Charles, Zachary
Ganesh, Arun
McKenna, Ryan
McMahan, H. Brendan
Mitchell, Nicole
Pillutla, Krishna
Rush, Keith
Publication Year :
2024

Abstract

We investigate practical and scalable algorithms for training large language models (LLMs) with user-level differential privacy (DP) in order to provably safeguard all the examples contributed by each user. We study two variants of DP-SGD with: (1) example-level sampling (ELS) and per-example gradient clipping, and (2) user-level sampling (ULS) and per-user gradient clipping. We derive a novel user-level DP accountant that allows us to compute provably tight privacy guarantees for ELS. Using this, we show that while ELS can outperform ULS in specific settings, ULS generally yields better results when each user has a diverse collection of examples. We validate our findings through experiments in synthetic mean estimation and LLM fine-tuning tasks under fixed compute budgets. We find that ULS is significantly better in settings where either (1) strong privacy guarantees are required, or (2) the compute budget is large. Notably, our focus on LLM-compatible training algorithms allows us to scale to models with hundreds of millions of parameters and datasets with hundreds of thousands of users.
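To make the second variant concrete, here is a minimal sketch of one aggregation step of DP-SGD with user-level sampling (ULS) and per-user gradient clipping, as the abstract describes it. This is an illustrative reconstruction, not the paper's implementation: the function name `user_level_dp_sgd_step` and its parameters are assumptions, and each user's contribution is taken to be a single (e.g. averaged) gradient vector.

```python
import numpy as np

def user_level_dp_sgd_step(user_grads, clip_norm, noise_mult, rng):
    """One noisy aggregation step of DP-SGD with per-user clipping (a sketch).

    user_grads: list of per-user gradient vectors (1-D arrays), e.g. each
        user's average gradient over their sampled examples.
    clip_norm: per-user L2 clipping threshold C, which bounds each user's
        contribution and hence the sensitivity of the sum.
    noise_mult: Gaussian noise multiplier (sigma); 0 disables noise.
    """
    clipped = []
    for g in user_grads:
        norm = np.linalg.norm(g)
        # Scale the gradient down only if its norm exceeds clip_norm.
        scale = min(1.0, clip_norm / (norm + 1e-12))
        clipped.append(g * scale)
    total = np.sum(clipped, axis=0)
    # Gaussian noise calibrated to the per-user sensitivity clip_norm.
    noise = rng.normal(0.0, noise_mult * clip_norm, size=total.shape)
    return (total + noise) / len(user_grads)
```

The contrasting ELS variant would instead clip each example's gradient individually; the key design difference the paper studies is that ULS bounds each user's total contribution with a single clip, which matters when a user contributes many examples.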

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2407.07737
Document Type :
Working Paper