
An Experimental Study on Private Aggregation of Teacher Ensemble Learning for End-to-End Speech Recognition

Authors:
Yang, Chao-Han Huck
Chen, I-Fan
Stolcke, Andreas
Siniscalchi, Sabato Marco
Lee, Chin-Hui
Publication Year:
2022

Abstract

Differential privacy (DP) is one data protection avenue to safeguard user information used for training deep models by imposing noisy distortion on private data. Such noise perturbation often results in severe performance degradation in automatic speech recognition (ASR) when a privacy budget $\varepsilon$ must be met. Private aggregation of teacher ensembles (PATE) utilizes ensemble probabilities to improve ASR accuracy when dealing with the noise effects controlled by small values of $\varepsilon$. We extend PATE learning to work with dynamic patterns, namely speech utterances, and perform a first experimental demonstration that it prevents acoustic data leakage in ASR training. We evaluate three end-to-end deep models, including LAS, hybrid CTC/attention, and RNN transducer, on the open-source LibriSpeech and TIMIT corpora. PATE learning-enhanced ASR models outperform the benchmark DP-SGD mechanisms, especially under strict DP budgets, giving relative word error rate reductions between 26.2% and 27.5% for an RNN transducer model evaluated with LibriSpeech. We also introduce a DP-preserving ASR solution for pretraining on public speech corpora.

Comment: 5 pages. Accepted to IEEE SLT 2022. A first version of this draft was finished in August 2021.
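For readers unfamiliar with the aggregation step that PATE builds on, the sketch below shows the classic noisy-max vote over teacher predictions for a single static label. The paper's contribution is extending this idea to dynamic patterns such as speech utterances with end-to-end ASR models, which this toy example does not cover. The function name `pate_noisy_aggregate`, the Laplace scale parameter `gamma`, and the toy vote counts are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def pate_noisy_aggregate(teacher_votes, num_classes, gamma, rng=None):
    """Classic PATE-style noisy-max aggregation (illustrative only).

    teacher_votes: 1-D array of per-teacher predicted class indices for one input.
    gamma: noise scale parameter; smaller gamma -> more Laplace noise -> stronger privacy.
    Returns the class index with the largest noise-perturbed vote count.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Count how many teachers voted for each class.
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(float)
    # Perturb the counts with Laplace noise before taking the argmax.
    noisy_counts = counts + rng.laplace(loc=0.0, scale=1.0 / gamma, size=num_classes)
    return int(np.argmax(noisy_counts))

# Toy usage: 10 teachers vote over 5 classes; a student model would train on the noisy label.
rng = np.random.default_rng(0)
votes = np.array([2, 2, 2, 1, 2, 4, 2, 2, 1, 2])
student_label = pate_noisy_aggregate(votes, num_classes=5, gamma=0.5, rng=rng)
print(student_label)
```

Because the student only ever sees noise-perturbed aggregate votes rather than any single teacher's output, the sensitive data held by each teacher is shielded; the privacy budget $\varepsilon$ tightens as the noise scale grows.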

Details

Database:
arXiv
Publication Type:
Report
Accession Number:
edsarx.2210.05614
Document Type:
Working Paper