
Compact deep neural networks for real-time speech enhancement on resource-limited devices.

Authors :
Wahab, Fazal E
Ye, Zhongfu
Saleem, Nasir
Ullah, Rizwan
Source :
Speech Communication. Jan 2024, Vol. 156.
Publication Year :
2024

Abstract

In real-time applications, the aim of speech enhancement (SE) is to achieve optimal performance while ensuring computational efficiency and near-instant outputs. Many deep neural models achieve strong speech quality and intelligibility, but formulating efficient and compact deep neural models for real-time processing on resource-limited devices remains a challenge. This study presents a compact neural model for speech enhancement, designed in the complex frequency domain and optimized for resource-limited devices. The proposed model combines convolutional encoder-decoder and recurrent architectures to learn complex spectral mappings from noisy speech, enabling low-latency causal processing. Recurrent architectures, namely Long Short-Term Memory (LSTM), Gated Recurrent Unit (GRU), and Simple Recurrent Unit (SRU), are incorporated as bottlenecks to capture temporal dependencies and improve SE performance. By representing speech in the complex frequency domain, the proposed model processes both magnitude and phase information. The study further extends the proposed models with attention-gate-based skip connections, enabling the models to focus on relevant information and dynamically weight important features. The results show that the proposed models outperform recent benchmark models in speech quality and intelligibility while incurring a lower computational load. Evaluation uses the WSJ0 database, where clean WSJ0 sentences are mixed with various background noises to create noisy mixtures. On the WSJ0 database, STOI and PESQ improve by 21.1% and 1.25 (41.5%), respectively; on the VoiceBank+DEMAND database, STOI and PESQ improve by 4.1% and 1.24 (38.6%), respectively. The extended models show further STOI and PESQ improvements in both seen and unseen noisy conditions.

• Proposed deep models with LSTM, GRU, and SRU bottlenecks for speech enhancement.
• To capture temporal sequences, LSTM, GRU, and SRU are applied to the extracted spectral features.
• Attention gates are added to the skip connections to focus on important spectral features.
• The proposed SE models are evaluated on two datasets (WSJ0 and VoiceBank+DEMAND).

[ABSTRACT FROM AUTHOR]
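To make the described architecture concrete, below is a minimal PyTorch sketch of this class of model: a causal convolutional encoder-decoder with a recurrent bottleneck operating on complex STFT features (real and imaginary parts stacked as input channels), plus an attention-gated skip connection. All layer shapes, the choice of GRU over LSTM/SRU, and every class and parameter name are illustrative assumptions for exposition, not the authors' implementation.

    import torch
    import torch.nn as nn

    class AttentionGate(nn.Module):
        """Attention-gated skip connection (hypothetical form): the decoder
        feature produces a per-element gate that reweights the encoder
        feature before fusion, emphasizing informative spectral regions."""
        def __init__(self, channels):
            super().__init__()
            self.gate = nn.Sequential(
                nn.Conv2d(2 * channels, channels, kernel_size=1),
                nn.Sigmoid(),
            )

        def forward(self, skip, dec):
            # skip, dec: (batch, channels, freq, time)
            return skip * self.gate(torch.cat([skip, dec], dim=1))

    class CRNEnhancer(nn.Module):
        """Causal conv encoder -> GRU bottleneck -> deconv decoder, applied
        to a complex STFT (real/imag stacked as 2 channels). Shapes assume
        n_fft=512, i.e. 257 frequency bins; all sizes are illustrative."""
        def __init__(self, rnn_hidden=256):
            super().__init__()
            # Encoder strides along frequency only; time resolution is kept,
            # so each frame can be processed causally with no lookahead.
            self.enc1 = nn.Sequential(
                nn.Conv2d(2, 16, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ELU())
            self.enc2 = nn.Sequential(
                nn.Conv2d(16, 32, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ELU())
            # Recurrent bottleneck over time captures temporal dependencies;
            # LSTM or SRU are drop-in alternatives per the abstract.
            self.rnn = nn.GRU(32 * 65, rnn_hidden, batch_first=True)
            self.proj = nn.Linear(rnn_hidden, 32 * 65)
            self.dec2 = nn.Sequential(
                nn.ConvTranspose2d(32, 16, (3, 1), stride=(2, 1), padding=(1, 0)), nn.ELU())
            self.attn = AttentionGate(16)
            self.dec1 = nn.ConvTranspose2d(16, 2, (3, 1), stride=(2, 1), padding=(1, 0))

        def forward(self, spec):
            # spec: (batch, 2, 257, time) real/imag of the noisy STFT
            e1 = self.enc1(spec)                # (B, 16, 129, T)
            e2 = self.enc2(e1)                  # (B, 32, 65, T)
            b, c, f, t = e2.shape
            r, _ = self.rnn(e2.permute(0, 3, 1, 2).reshape(b, t, c * f))
            r = self.proj(r).reshape(b, t, c, f).permute(0, 2, 3, 1)
            d2 = self.dec2(r)                   # (B, 16, 129, T)
            d2 = d2 + self.attn(e1, d2)         # attention-gated skip fusion
            return self.dec1(d2)                # (B, 2, 257, T) enhanced real/imag

    # Usage: 100 noisy STFT frames in, enhanced real/imag frames out.
    x = torch.randn(1, 2, 257, 100)
    y = CRNEnhancer()(x)
    print(y.shape)  # torch.Size([1, 2, 257, 100])

Because convolutions stride only along frequency and the recurrence runs forward in time, the model needs no future frames, which is what permits the low-latency causal processing the abstract claims; predicting real and imaginary components jointly is one common way to enhance magnitude and phase together in the complex frequency domain.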

Details

Language :
English
ISSN :
0167-6393
Volume :
156
Database :
Academic Search Index
Journal :
Speech Communication
Publication Type :
Academic Journal
Accession number :
174759306
Full Text :
https://doi.org/10.1016/j.specom.2023.103008