
A hybrid technique for speech segregation and classification using a sophisticated deep neural network.

Authors :
Qazi, Khurram Ashfaq
Nawaz, Tabassam
Mehmood, Zahid
Rashid, Muhammad
Habib, Hafiz Adnan
Source :
PLoS ONE; 3/20/2018, Vol. 13 Issue 3, p1-15, 15p
Publication Year :
2018

Abstract

Recent research on speech segregation and music fingerprinting has led to improvements in speech segregation and music identification algorithms. Speech and music segregation generally involves the identification of music followed by speech segregation. However, music segregation becomes a challenging task in the presence of noise. This paper proposes a novel method of speech segregation for unlabelled stationary noisy audio signals using the deep belief network (DBN) model. The proposed method successfully segregates a music signal from noisy audio streams. A recurrent neural network (RNN)-based hidden layer segregation model is applied to remove stationary noise. Dictionary-based Fisher algorithms are employed for speech classification. The proposed method is tested on three datasets (TIMIT, MIR-1K, and MusicBrainz), and the results indicate the robustness of the proposed method for speech segregation. The qualitative and quantitative analyses carried out on the three datasets demonstrate the efficiency of the proposed method compared to state-of-the-art speech segregation and classification methods. [ABSTRACT FROM AUTHOR]
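The abstract describes a masking-style segregation pipeline (DBN for music/speech segregation, RNN layers for stationary-noise removal) without giving implementation details. The following is a minimal sketch, not the authors' code: it shows the common time-frequency masking front-end that such DNN-based segregation methods operate on, with a crude placeholder mask standing in for the learned DBN/RNN estimator.

```python
# Minimal sketch (assumed pipeline, not the paper's implementation):
# time-frequency masking for speech segregation from a noisy signal.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
noisy = np.random.randn(fs)           # placeholder for a noisy speech signal

# 1. Transform to the time-frequency domain.
f, t, Z = stft(noisy, fs=fs, nperseg=512)

# 2. Estimate a soft mask in [0, 1].  In the paper this step is learned
#    (DBN for music/speech segregation, RNN layers for stationary noise);
#    the magnitude-based mask below is only an illustrative stand-in.
mag = np.abs(Z)
mask = mag / (mag + np.median(mag))   # crude Wiener-like placeholder mask

# 3. Apply the mask and reconstruct the segregated signal.
_, segregated = istft(mask * Z, fs=fs, nperseg=512)
```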

Details

Language :
English
ISSN :
19326203
Volume :
13
Issue :
3
Database :
Complementary Index
Journal :
PLoS ONE
Publication Type :
Academic Journal
Accession Number :
128574166
Full Text :
https://doi.org/10.1371/journal.pone.0194151