
Visual Speech Recognition for Multiple Languages in the Wild

Authors:
Ma, Pingchuan
Petridis, Stavros
Pantic, Maja
Publication Year:
2022

Abstract

Visual speech recognition (VSR) aims to recognize the content of speech based on lip movements, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to the development of much more accurate and robust VSR models than ever before. However, these advances are usually due to larger training sets rather than the model design. Here we demonstrate that designing better models is as important as using larger training sets. We propose the addition of prediction-based auxiliary tasks to a VSR model, and highlight the importance of hyperparameter optimization and appropriate data augmentations. We show that such a model works for different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models that were trained on non-publicly available datasets containing up to 21 times more data. We show, furthermore, that using additional training data, even in other languages or with automatically generated transcriptions, results in further improvement.

Comment: Published in Nature Machine Intelligence
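The "prediction-based auxiliary tasks" mentioned in the abstract can be illustrated as a multi-task training objective: the main recognition loss is combined with weighted auxiliary prediction losses. This is a minimal sketch under assumed names and weighting; the paper's exact losses and weights are not given here.

```python
# Minimal sketch of a multi-task VSR training objective: the main
# recognition loss plus weighted auxiliary prediction losses.
# The function name, the single shared weight `aux_weight`, and the
# example values are illustrative assumptions, not the paper's exact
# formulation.

def combined_loss(main_loss, aux_losses, aux_weight=0.1):
    """Total = main recognition loss + aux_weight * sum of auxiliary
    prediction losses (e.g. predicting acoustic features from video)."""
    return main_loss + aux_weight * sum(aux_losses)

# Example: a main recognition loss of 2.0 plus two auxiliary losses.
total = combined_loss(main_loss=2.0, aux_losses=[0.5, 0.3], aux_weight=0.1)
print(total)  # 2.0 + 0.1 * (0.5 + 0.3) = 2.08
```

In practice each auxiliary loss would come from a separate prediction head sharing the visual encoder, so the extra tasks regularize the shared representation without adding inference-time cost.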

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2202.13084
Document Type:
Working Paper
Full Text:
https://doi.org/10.1038/s42256-022-00550-z