Improving Machine Hearing on Limited Data Sets

Authors :
Anna Breger
Zdenek Smekal
Pavol Harar
Roswitha Bammer
Monika Dörfler
Source :
ICUMT
Publication Year :
2019
Publisher :
IEEE, 2019.

Abstract

Convolutional neural network (CNN) architectures originated in, and have revolutionized, machine learning for images. In order to take advantage of CNNs in predictive modeling with audio data, standard FFT-based signal processing methods are often applied to convert the raw audio waveforms into image-like representations (e.g. spectrograms). Even though conventional images and spectrograms differ in their feature properties, this kind of pre-processing reduces the amount of training data necessary for successful training. In this contribution we investigate how input and target representations interact with the amount of available training data in a music information retrieval setting. We compare the standard mel-spectrogram inputs with a newly proposed representation, called Mel scattering. Furthermore, we investigate the impact of additional target data representations by using an augmented target loss function that incorporates otherwise unused available information. We observe that all proposed methods outperform the standard mel-transform representation when using a limited data set, and we discuss their strengths and limitations. The source code for reproducing our experiments, as well as intermediate results and model checkpoints, is available in an online repository.

Comment :
13 pages, 3 figures, 2 tables. Repository for reproducibility: https://gitlab.com/hararticles/gs-ms-mt/. Keywords: audio, CNN, limited data, Mel scattering, mel-spectrogram, augmented target loss function. Rewritten and restructured after peer revision; recomputed and added new experiments and visualizations; changed the presentation of the results.
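As a minimal sketch of the kind of FFT-based pre-processing the abstract refers to, the snippet below turns a raw waveform into a log-scaled mel-spectrogram using librosa. This is not the authors' pipeline; the file name, sample rate, FFT size, hop length and number of mel bands are illustrative defaults, not values taken from the paper.

```python
# Sketch: raw audio waveform -> image-like mel-spectrogram input for a CNN.
import librosa
import numpy as np

y, sr = librosa.load("example.wav", sr=22050)            # raw audio waveform
S = librosa.feature.melspectrogram(y=y, sr=sr,
                                   n_fft=2048, hop_length=512, n_mels=128)
S_db = librosa.power_to_db(S, ref=np.max)                # log scale, 2D array (n_mels, n_frames)
print(S_db.shape)                                        # fed to the CNN like an image
```

The augmented target loss mentioned in the abstract could, in spirit, look like the following: a primary loss combined with an auxiliary term that compares an additional representation of the outputs against otherwise unused target data. This is one plausible reading only, not the authors' formulation; `T`, `aux_target` and `lam` are hypothetical placeholders.

```python
# Hedged sketch of an augmented target loss: primary loss + weighted auxiliary term.
import torch
import torch.nn.functional as F

def augmented_target_loss(pred, target, aux_target, T, lam=0.1):
    primary = F.cross_entropy(pred, target)              # standard classification loss
    auxiliary = F.mse_loss(T(pred), aux_target)          # penalty on the additional target representation
    return primary + lam * auxiliary
```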

Details

Database :
OpenAIRE
Journal :
2019 11th International Congress on Ultra Modern Telecommunications and Control Systems and Workshops (ICUMT)
Accession number :
edsair.doi.dedup.....c4d9350edaf01b8f1418551c586db510
Full Text :
https://doi.org/10.1109/icumt48472.2019.8970740