
Factors in Finetuning Deep Model for Object Detection

Authors :
Ouyang, Wanli
Wang, Xiaogang
Zhang, Cong
Yang, Xiaokang
Publication Year :
2016

Abstract

Finetuning from a pretrained deep model has been found to yield state-of-the-art performance for many vision tasks. This paper investigates the factors that influence finetuning performance for object detection. The number of samples per class in object detection follows a long-tailed distribution. Our analysis and empirical results show that classes with more samples have a larger influence on feature learning, and that it is better to make the sample number more uniform across classes. Generic object detection can be considered as multiple equally important tasks, where the detection of each class is one task. These classes/tasks have their own individuality in discriminative visual appearance representation. Taking this individuality into account, we cluster objects into visually similar class groups and learn deep representations for these groups separately. A hierarchical feature learning scheme is proposed, in which the knowledge from a group with a large number of classes is transferred for learning features in its sub-groups. Finetuning from the GoogLeNet model, our approach achieves a 4.7% absolute mAP improvement on the ImageNet object detection dataset without adding much computational cost at the testing stage.

Comment: CVPR 2016 camera-ready version. Our ImageNet Large Scale Visual Recognition Challenge (ILSVRC15) object detection results (ranked 3rd for provided data and 2nd for external data) are based on this method. Code will be made available at http://www.ee.cuhk.edu.hk/~wlouyang/projects/ImageNetFactors/CVPR16.html
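The class grouping and sample rebalancing described in the abstract can be illustrated with a short sketch. The snippet below is a minimal illustration under assumptions, not the authors' released code: it assumes per-class mean appearance features and per-class sample counts are already available, and the names `class_features`, `samples_per_class`, and the median-based cap are hypothetical choices.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering

# Minimal sketch (not the authors' code): group classes by visual similarity
# and draw a more uniform number of training samples per class.

def group_classes(class_features, n_groups):
    """Cluster per-class mean appearance features into visually similar groups."""
    clustering = AgglomerativeClustering(n_clusters=n_groups)
    return clustering.fit_predict(class_features)  # group id for each class

def balanced_sample_counts(samples_per_class, cap=None):
    """Flatten the long-tailed distribution by capping the samples drawn per class."""
    counts = np.asarray(samples_per_class)
    if cap is None:
        cap = int(np.median(counts))  # hypothetical choice of cap
    return np.minimum(counts, cap)

# Toy usage: 200 classes with 64-d mean features and long-tailed sample counts.
rng = np.random.default_rng(0)
class_features = rng.normal(size=(200, 64))
samples_per_class = rng.zipf(2.0, size=200) * 10
groups = group_classes(class_features, n_groups=4)
draw_counts = balanced_sample_counts(samples_per_class)
```

In the paper's hierarchical scheme, features learned on a large group of classes would then be used to initialize finetuning within each of its sub-groups; the sketch above only shows the grouping and rebalancing steps.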

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1601.05150
Document Type :
Working Paper