
Deep Relative Attributes.

Authors :
Yang, Xiaoshan
Zhang, Tianzhu
Xu, Changsheng
Yan, Shuicheng
Hossain, M. Shamim
Ghoneim, Ahmed
Source :
IEEE Transactions on Multimedia; Sep 2016, Vol. 18 Issue 9, p1832-1842, 11p
Publication Year :
2016

Abstract

Relative attribute (RA) learning aims to learn a ranking function that describes the relative strength of an attribute. Most current approaches learn a linear ranking function for each attribute using hand-crafted visual features. Unlike existing studies, in this paper we propose a novel deep relative attributes (DRA) algorithm that learns visual features and an effective nonlinear ranking function describing the RA of image pairs in a unified framework. Here, the visual features and the ranking function are learned jointly, so they can benefit each other. The proposed DRA model comprises five convolutional neural layers, five fully connected layers, and a relative loss function that contains a contrastive constraint and a similarity constraint, corresponding to ordered and unordered image pairs, respectively. To train the DRA model effectively, we transfer knowledge from large-scale visual recognition on ImageNet [1] to the RA learning task. We evaluate the proposed DRA model on three widely used datasets. Extensive experimental results demonstrate that the proposed DRA model consistently and significantly outperforms state-of-the-art RA learning methods. On the public OSR, PubFig, and Shoes datasets, compared with previous RA learning results [2], the average ranking accuracies improve by about 8%, 9%, and 14%, respectively. [ABSTRACT FROM PUBLISHER]
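The exact loss is defined in the paper itself; purely to illustrate the two constraints the abstract names, here is a minimal PyTorch-style sketch. The margin-based hinge for the contrastive term, the squared penalty for the similarity term, and all names and values (`relative_loss`, `margin=1.0`) are assumptions for illustration, not the authors' formulation.

```python
import torch
import torch.nn.functional as F

def relative_loss(score_i, score_j, is_ordered, margin=1.0):
    """Sketch of a relative loss with the two constraints from the abstract.

    score_i, score_j : predicted attribute strengths r(x_i), r(x_j)
    is_ordered       : 1.0 where x_i is known to be stronger than x_j
                       (ordered pair), 0.0 where the pair is unordered.
    Hypothetical form; the paper's actual loss may differ.
    """
    # Contrastive constraint: ordered pairs should be separated by a margin,
    # i.e. r(x_i) - r(x_j) >= margin incurs no penalty.
    contrastive = F.relu(margin - (score_i - score_j))
    # Similarity constraint: unordered (similar) pairs should score alike.
    similar = (score_i - score_j) ** 2
    return (is_ordered * contrastive + (1.0 - is_ordered) * similar).mean()

# Toy usage: two ordered pairs and one unordered pair.
si = torch.tensor([2.0, 0.5, 1.0])
sj = torch.tensor([0.5, 0.4, 1.1])
flags = torch.tensor([1.0, 1.0, 0.0])
print(relative_loss(si, sj, flags))
```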

Details

Language :
English
ISSN :
1520-9210
Volume :
18
Issue :
9
Database :
Complementary Index
Journal :
IEEE Transactions on Multimedia
Publication Type :
Academic Journal
Accession number :
117445166
Full Text :
https://doi.org/10.1109/TMM.2016.2582379