Music auto-tagging using deep Recurrent Neural Networks.

Authors :
Song, Guangxiao
Wang, Zhijie
Han, Fang
Ding, Shenyi
Iqbal, Muhammad Ather
Source :
Neurocomputing. May 2018, Vol. 292, p104-110. 7p.
Publication Year :
2018

Abstract

Musical tags are used to describe music and are central to music information retrieval. Existing methods for music auto-tagging usually consist of a preprocessing phase (feature extraction) and a machine learning phase. However, the preprocessing phase of most existing methods suffers either from information loss or from insufficient features, while the machine learning phase depends heavily on the features extracted in the preprocessing phase and lacks the ability to exploit the raw information. To solve this problem, we propose a content-based automatic tagging algorithm using a deep Recurrent Neural Network (RNN) with scattering-transformed inputs. Acting as the first phase, the scattering transform extracts features from the raw data while retaining much more information than traditional representations such as mel-frequency cepstral coefficients (MFCC) and the mel-frequency spectrogram. A five-layer RNN with Gated Recurrent Units (GRU) and a sigmoid output layer serves as the second phase of our algorithm; such networks are extremely powerful machine learning tools capable of making full use of the data fed to them. To evaluate the performance of the architecture, we experiment on the MagnaTagATune dataset using the area under the ROC curve (AUC-ROC) as the measurement. Experimental results show that the proposed method boosts tagging performance compared with state-of-the-art models. Additionally, our architecture results in faster training and lower memory usage. [ABSTRACT FROM AUTHOR]
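The abstract specifies the second phase concretely: a five-layer GRU network followed by a sigmoid output layer producing one probability per tag. The sketch below (PyTorch) illustrates that structure only; all dimensions (64 scattering coefficients per frame, hidden size 256, 50 tags, batch and sequence sizes) are illustrative assumptions, not values from the paper, and the scattering-transform front end is stubbed out with a random tensor.

```python
# A minimal sketch of the tagging network described in the abstract:
# five stacked GRU layers and a sigmoid output layer for multi-label
# tag prediction. Hyperparameters are hypothetical placeholders.
import torch
import torch.nn as nn

class GRUTagger(nn.Module):
    def __init__(self, n_features: int, n_tags: int, hidden: int = 256):
        super().__init__()
        # Five stacked GRU layers, per the abstract.
        self.gru = nn.GRU(n_features, hidden, num_layers=5, batch_first=True)
        # Sigmoid output layer: an independent probability for each tag.
        self.head = nn.Sequential(nn.Linear(hidden, n_tags), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features) sequence of scattering coefficients.
        _, h = self.gru(x)       # h: (num_layers, batch, hidden)
        return self.head(h[-1])  # summarize with the top layer's final state

# Stand-in for scattering-transformed audio: 8 clips, 96 frames,
# 64 coefficients per frame (all assumed sizes).
features = torch.randn(8, 96, 64)
model = GRUTagger(n_features=64, n_tags=50)
probs = model(features)  # (8, 50) tag probabilities in [0, 1]
print(probs.shape)
```

Training such a model would typically minimize a binary cross-entropy loss over the tag probabilities, and the AUC-ROC measurement mentioned in the abstract can be computed per tag with a standard routine such as sklearn.metrics.roc_auc_score.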

Details

Language :
English
ISSN :
0925-2312
Volume :
292
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
128741959
Full Text :
https://doi.org/10.1016/j.neucom.2018.02.076