
Synchformer: Efficient Synchronization from Sparse Cues

Authors: Iashin, Vladimir; Xie, Weidi; Rahtu, Esa; Zisserman, Andrew
Publication Year: 2024

Abstract

Our objective is audio-visual synchronization with a focus on 'in-the-wild' videos, such as those on YouTube, where synchronization cues can be sparse. Our contributions include a novel audio-visual synchronization model, and training that decouples feature extraction from synchronization modelling through multi-modal segment-level contrastive pre-training. This approach achieves state-of-the-art performance in both dense and sparse settings. We also extend synchronization model training to AudioSet, a million-scale 'in-the-wild' dataset, investigate evidence attribution techniques for interpretability, and explore a new capability for synchronization models: audio-visual synchronizability.

Comment: Extended version of the ICASSP 24 paper. Project page: https://www.robots.ox.ac.uk/~vgg/research/synchformer/ Code: https://github.com/v-iashin/Synchformer
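To make the "segment-level contrastive pre-training" idea concrete: a common formulation (e.g. a symmetric InfoNCE objective) treats the audio and visual embeddings of the same temporal segment as a positive pair and all other segments in the batch as negatives. The sketch below is an illustrative reconstruction of that generic objective, not the paper's actual implementation; the function name, embedding shapes, and temperature value are all assumptions.

```python
import numpy as np

def segment_contrastive_loss(audio_emb, visual_emb, temperature=0.1):
    """Symmetric InfoNCE loss over temporally aligned segment pairs.

    audio_emb, visual_emb: (num_segments, dim) arrays; row i of each
    modality comes from the same video segment (the positive pair),
    and every other row in the batch serves as a negative.
    NOTE: an illustrative sketch, not Synchformer's exact objective.
    """
    # L2-normalize so dot products become cosine similarities.
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)

    logits = a @ v.T / temperature   # (N, N); positives on the diagonal
    labels = np.arange(len(a))

    # Cross-entropy in both directions: audio->visual and visual->audio.
    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    return 0.5 * (xent(logits) + xent(logits.T))
```

Pre-training the two feature extractors with such a loss aligns audio and visual segment embeddings in a shared space, after which a separate synchronization head can be trained on top of the frozen (or lightly tuned) features.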

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2401.16423
Document Type: Working Paper