Fine-grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary Object Detection

Authors:
Long, Yanxin
Han, Jianhua
Huang, Runhui
Xu, Hang
Zhu, Yi
Xu, Chunjing
Liang, Xiaodan
Publication Year:
2022

Abstract

Inspired by the success of vision-language models (VLMs) in zero-shot classification, recent works attempt to extend this line of work to object detection by leveraging the localization ability of pre-trained VLMs and generating pseudo labels for unseen classes in a self-training manner. However, since current VLMs are usually pre-trained by aligning a sentence embedding with a global image embedding, using them directly lacks the fine-grained, instance-level alignment that is the core of detection. In this paper, we propose a simple but effective fine-grained Visual-Text Prompt-driven self-training paradigm for Open-Vocabulary Detection (VTP-OVD) that introduces a fine-grained visual-text prompt adapting stage to enhance the current self-training paradigm with more powerful fine-grained alignment. During the adapting stage, we enable the VLM to obtain fine-grained alignment by using learnable text prompts to solve an auxiliary dense pixel-wise prediction task. Furthermore, we propose a visual prompt module that provides prior task information (i.e., the categories to be predicted) to the vision branch, better adapting the pre-trained VLM to the downstream task. Experiments show that our method achieves state-of-the-art performance for open-vocabulary object detection, e.g., 31.5% mAP on unseen classes of COCO.
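As a rough illustration of the two ideas named in the abstract, learnable text prompts and an auxiliary dense pixel-wise alignment task, the sketch below shows a CoOp-style prompted text encoder and a pixel-to-class similarity head. Everything here is an assumption for illustration: the stand-in encoders, the shapes, and the names PromptedTextEncoder and dense_alignment_logits are not the authors' implementation.

```python
# Minimal sketch (not the authors' code): learnable text prompts plus a
# dense pixel-wise visual-text alignment head, under assumed shapes.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedTextEncoder(nn.Module):
    """Prepends shared learnable context vectors to frozen class tokens."""
    def __init__(self, num_classes=20, num_ctx=8, dim=512):
        super().__init__()
        # Learnable prompt context: the only trainable text-side parameters.
        self.ctx = nn.Parameter(0.02 * torch.randn(num_ctx, dim))
        # Stand-in for frozen token embeddings of the class names.
        self.cls_tokens = nn.Parameter(torch.randn(num_classes, dim),
                                       requires_grad=False)
        # Stand-in for the frozen VLM text encoder.
        self.text_proj = nn.Linear(dim, dim)

    def forward(self):
        # Build [ctx_1, ..., ctx_M, CLASS] per class, mean-pool, project.
        ctx = self.ctx.unsqueeze(0).expand(self.cls_tokens.size(0), -1, -1)
        tokens = torch.cat([ctx, self.cls_tokens.unsqueeze(1)], dim=1)
        return F.normalize(self.text_proj(tokens.mean(dim=1)), dim=-1)

def dense_alignment_logits(pixel_feats, text_emb, tau=0.07):
    """Pixel-wise class logits from cosine similarity: the auxiliary dense
    prediction task that adapts the VLM toward instance-level alignment."""
    pixel_feats = F.normalize(pixel_feats, dim=1)            # (B, D, H, W)
    return torch.einsum("bdhw,cd->bchw", pixel_feats, text_emb) / tau

# Usage: supervise the dense logits with pixel-wise pseudo labels while the
# VLM backbone stays frozen; a visual prompt module (not shown) would also
# inject the target categories into the vision branch.
text_encoder = PromptedTextEncoder()
pixel_feats = torch.randn(2, 512, 32, 32)   # stand-in dense image features
logits = dense_alignment_logits(pixel_feats, text_encoder())  # (2, 20, 32, 32)
```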

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2211.00849
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TNNLS.2023.3293484