
Video OWL-ViT: Temporally-consistent open-world localization in video

Authors:
Heigold, Georg
Minderer, Matthias
Gritsenko, Alexey
Bewley, Alex
Keysers, Daniel
Lučić, Mario
Yu, Fisher
Kipf, Thomas
Publication Year:
2023

Abstract

We present an architecture and a training recipe that adapts pre-trained open-world image models to localization in videos. Understanding the open visual world (without being constrained by fixed label spaces) is crucial for many real-world vision tasks. Contrastive pre-training on large image-text datasets has recently led to significant improvements for image-level tasks. For more structured tasks involving object localization, applying pre-trained models is more challenging. This is particularly true for video tasks, where task-specific data is limited. We show successful transfer of open-world models by building on the OWL-ViT open-vocabulary detection model and adapting it to video by adding a transformer decoder. The decoder propagates object representations recurrently through time by using the output tokens for one frame as the object queries for the next. Our model is end-to-end trainable on video data and enjoys improved temporal consistency compared to tracking-by-detection baselines, while retaining the open-world capabilities of the backbone detector. We evaluate our model on the challenging TAO-OW benchmark and demonstrate that open-world capabilities, learned from large-scale image-text pre-training, can be transferred successfully to open-world localization across diverse videos.

Comment: ICCV 2023
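
As a rough illustration of the recurrent query propagation described in the abstract, the following is a minimal sketch in JAX. It is not the authors' code: the function names (decode_frame, rollout), the single attention step standing in for the transformer decoder, and all shapes are assumptions made for the example.

    # Illustrative sketch only; assumed names and shapes, not the paper's code.
    import jax
    import jax.numpy as jnp

    def decode_frame(params, frame_features, queries):
        # Stand-in for the transformer decoder: queries attend to per-frame
        # image features. A real decoder would use multi-head attention and
        # MLP blocks; one dot-product attention step keeps the sketch small.
        attn = jax.nn.softmax(queries @ frame_features.T / jnp.sqrt(queries.shape[-1]))
        return attn @ frame_features @ params["proj"]

    def rollout(params, video_features, init_queries):
        # Propagate object tokens through time: the decoder's output tokens
        # for frame t become the object queries for frame t+1.
        def step(queries, frame_features):
            tokens = decode_frame(params, frame_features, queries)
            return tokens, tokens  # carry forward, and record per-frame outputs
        _, per_frame_tokens = jax.lax.scan(step, init_queries, video_features)
        return per_frame_tokens  # shape: [num_frames, num_queries, dim]

    # Toy usage: 8 frames of 196 patch features, 100 object queries, 256-d.
    key = jax.random.PRNGKey(0)
    params = {"proj": 0.02 * jax.random.normal(key, (256, 256))}
    video = jax.random.normal(key, (8, 196, 256))   # assumed per-frame backbone features
    queries = jax.random.normal(key, (100, 256))    # assumed initial object queries
    print(rollout(params, video, queries).shape)    # (8, 100, 256)

Because the per-frame decoder step is the same function applied to a carried state, jax.lax.scan expresses the recurrence directly; the carried tokens are what link an object's representation across frames, which is the source of the temporal consistency the abstract refers to.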

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2308.11093
Document Type:
Working Paper