Data-Efficient Language-Supervised Zero-Shot Learning with Self-Distillation
- Source: CVPR Workshops
- Publication Year: 2021
- Publisher: IEEE, 2021
Abstract
- Traditional computer vision models are trained to predict a fixed set of predefined categories. Recently, natural language has been shown to be a broader and richer source of supervision, providing finer descriptions of visual concepts than supervised "gold" labels. Previous works, such as CLIP, use a simple pretraining task of predicting the pairings between images and text captions. CLIP, however, is data hungry and requires more than 400M image-text pairs for training. We propose a data-efficient contrastive distillation method that uses soft labels to learn from noisy image-text pairs. Our model transfers knowledge from pretrained image and sentence encoders and achieves strong performance with only 3M image-text pairs, 133x smaller than CLIP. Our method exceeds the previous SoTA for general zero-shot learning on ImageNet 21k+1k by a relative 73% with a ResNet50 image encoder and a DeCLUTR text encoder. We also beat CLIP by a relative 10.5% on zero-shot evaluation on Google Open Images (19,958 classes).
- Comment: 4 pages, 1 figure
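
The abstract only outlines the objective: a CLIP-style contrastive loss over image-text pairs, softened by targets distilled from pretrained image and sentence encoders. Below is a minimal PyTorch sketch of what such a soft-label contrastive distillation loss could look like; it is an illustration under assumptions, not the authors' code. The function name `soft_contrastive_loss`, the mixing weight `alpha`, and the temperature `tau` are all hypothetical choices.

```python
# Minimal sketch (assumed, not the paper's implementation): a CLIP-style
# contrastive loss whose one-hot pairing targets are mixed with soft targets
# produced by pretrained teacher encoders.
import torch
import torch.nn.functional as F

def soft_contrastive_loss(img_emb, txt_emb, t_img_emb, t_txt_emb,
                          tau=0.07, alpha=0.5):
    """Batch of student image/text embeddings plus teacher embeddings.
    `alpha` blends hard (one-hot) and soft (teacher) targets; both are
    illustrative hyperparameters."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    t_img_emb = F.normalize(t_img_emb, dim=-1)
    t_txt_emb = F.normalize(t_txt_emb, dim=-1)

    # Student similarity logits (image -> text, and its transpose).
    logits_it = img_emb @ txt_emb.t() / tau
    logits_ti = logits_it.t()

    # Hard targets: the i-th image pairs with the i-th caption.
    n = img_emb.size(0)
    hard = torch.eye(n, device=img_emb.device)

    # Soft targets: teacher similarities turned into distributions.
    soft_it = F.softmax(t_img_emb @ t_txt_emb.t() / tau, dim=-1)
    soft_ti = F.softmax(t_txt_emb @ t_img_emb.t() / tau, dim=-1)

    # Blend hard and soft labels, then take cross-entropy with soft targets.
    tgt_it = alpha * hard + (1 - alpha) * soft_it
    tgt_ti = alpha * hard + (1 - alpha) * soft_ti
    loss_it = -(tgt_it * F.log_softmax(logits_it, dim=-1)).sum(-1).mean()
    loss_ti = -(tgt_ti * F.log_softmax(logits_ti, dim=-1)).sum(-1).mean()
    return 0.5 * (loss_it + loss_ti)
```

In the setting the abstract describes, the student embeddings would come from e.g. a ResNet50 image encoder and a DeCLUTR text encoder, with pretrained encoders supplying the teacher embeddings; the exact pairing of teachers and students here is an assumption for illustration.
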
- Subjects:
  - FOS: Computer and information sciences
  - Computer Science - Computer Vision and Pattern Recognition
  - Computer Vision and Pattern Recognition (cs.CV)
  - ComputingMethodologies_IMAGEPROCESSINGANDCOMPUTERVISION
  - business.industry
  - business
  - Computer science
  - Speech recognition
  - Image (mathematics)
  - Visualization
  - Set (abstract data type)
  - Task (computing)
  - Pattern recognition (psychology)
  - Artificial intelligence
  - Encoder
  - Natural language
  - Sentence
Details
- Database: OpenAIRE
- Journal: 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
- Accession number: edsair.doi.dedup.....e2cd736eb04712815fde6628ccb0d70a