
Training Large ASR Encoders with Differential Privacy

Authors :
Chauhan, Geeticka
Chien, Steve
Thakkar, Om
Thakurta, Abhradeep
Narayanan, Arun
Publication Year :
2024

Abstract

Self-supervised learning (SSL) methods for large speech models have proven to be highly effective at ASR. With the interest in public deployment of large pre-trained models, there is a rising concern for unintended memorization and leakage of sensitive data points from the training data. In this paper, we apply differentially private (DP) pre-training to a SOTA Conformer-based encoder, and study its performance on a downstream ASR task assuming the fine-tuning data is public. This paper is the first to apply DP to SSL for ASR, investigating the DP noise tolerance of the BEST-RQ pre-training method. Notably, we introduce a novel variant of model pruning called gradient-based layer freezing that provides strong improvements in privacy-utility-compute trade-offs. Our approach yields a LibriSpeech test-clean/other WER (%) of 3.78/8.41 with (10, 1e-9)-DP for extrapolation towards low dataset scales, and 2.81/5.89 with (10, 7.9e-11)-DP for extrapolation towards high scales.

Comment: In proceedings of the IEEE Spoken Language Technologies Workshop, 2024
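The abstract combines two ingredients: DP pre-training (in the standard DP-SGD style of per-example gradient clipping plus Gaussian noise) and a gradient-based layer-freezing heuristic. The sketch below is a minimal toy illustration of how those two pieces can fit together, not the paper's BEST-RQ implementation. All names (dp_sgd_step, layers_to_freeze, per_example_grads, conformer_block_i), the fabricated gradients, and the mean-gradient-norm freezing criterion are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy "model": a dict of layer name -> weight matrix.
params = {f"conformer_block_{i}": rng.normal(size=(4, 4)) for i in range(3)}


def per_example_grads(params, batch):
    """Placeholder for back-prop: returns one gradient dict per example.

    Random gradients are fabricated here so the sketch runs end to end.
    """
    return [
        {name: rng.normal(size=w.shape) for name, w in params.items()}
        for _ in batch
    ]


def dp_sgd_step(params, batch, clip_norm=1.0, noise_multiplier=1.0, lr=0.1,
                frozen=frozenset()):
    """One DP-SGD-style update: clip each example's gradient to clip_norm,
    sum, add Gaussian noise with std noise_multiplier * clip_norm, average,
    and skip layers that have been frozen."""
    grads = per_example_grads(params, batch)
    summed = {name: np.zeros_like(w) for name, w in params.items()}
    for g in grads:
        # Global L2 norm over all trainable layers of this example's gradient.
        flat = np.concatenate([g[n].ravel() for n in params if n not in frozen])
        scale = min(1.0, clip_norm / (np.linalg.norm(flat) + 1e-12))
        for name in params:
            if name not in frozen:
                summed[name] += scale * g[name]
    new_params = {}
    for name, w in params.items():
        if name in frozen:
            new_params[name] = w  # frozen layers receive no (noisy) update
            continue
        noise = rng.normal(scale=noise_multiplier * clip_norm, size=w.shape)
        new_params[name] = w - lr * (summed[name] + noise) / len(batch)
    return new_params


def layers_to_freeze(params, batch, keep_fraction=0.5):
    """Gradient-norm heuristic (assumed, for illustration): freeze the layers
    whose average per-example gradient norms are smallest, keeping only
    keep_fraction of the layers trainable."""
    grads = per_example_grads(params, batch)
    norms = {
        name: np.mean([np.linalg.norm(g[name]) for g in grads])
        for name in params
    }
    ranked = sorted(norms, key=norms.get, reverse=True)
    n_keep = max(1, int(keep_fraction * len(ranked)))
    return frozenset(ranked[n_keep:])


batch = list(range(8))                    # stand-in for 8 training examples
frozen = layers_to_freeze(params, batch)  # decide which layers to freeze
params = dp_sgd_step(params, batch, frozen=frozen)
print("frozen layers:", sorted(frozen))
```

Freezing low-gradient layers reduces both the dimensionality over which noise is added and the compute per step, which is the kind of privacy-utility-compute trade-off the abstract refers to; the specific selection rule used in the paper may differ from this sketch.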

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2409.13953
Document Type :
Working Paper