
CurlingNet: Compositional Learning between Images and Text for Fashion IQ Data

Authors :
Yu, Youngjae
Lee, Seunghwan
Choi, Yuncheol
Kim, Gunhee
Publication Year :
2020

Abstract

We present an approach named CurlingNet that can measure the semantic distance of a composition of image and text embeddings. In order to learn an effective image-text composition for data in the fashion domain, our model consists of two key components. First, the Delivery operation makes the transition of a source image in an embedding space. Second, the Sweeping operation emphasizes query-related components of fashion images in the embedding space. A channel-wise gating mechanism makes this possible. Our single model outperforms previous state-of-the-art image-text composition models, including TIRG and FiLM. We participated in the first Fashion-IQ challenge at ICCV 2019, where an ensemble of our models achieved one of the best performances.

Comment: 4 pages, 4 figures, ICCV 2019 Linguistics Meets Image and Video Retrieval workshop, Fashion-IQ challenge
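For illustration, below is a minimal PyTorch sketch of a channel-wise gated image-text composition in the spirit of the Delivery/Sweeping operations described in the abstract. The module and parameter names (GatedComposition, delivery, sweep_gate, dim) are hypothetical assumptions for this sketch and do not reproduce the authors' actual architecture.

import torch
import torch.nn as nn

class GatedComposition(nn.Module):
    # Hypothetical sketch: a shift of the source image embedding ("Delivery"-style)
    # followed by a channel-wise sigmoid gate ("Sweeping"-style emphasis).
    def __init__(self, dim):
        super().__init__()
        # Shift the source image embedding conditioned on the text query
        self.delivery = nn.Linear(2 * dim, dim)
        # Channel-wise gate that emphasizes query-related feature channels
        self.sweep_gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, img_feat, txt_feat):
        joint = torch.cat([img_feat, txt_feat], dim=-1)
        shifted = img_feat + self.delivery(joint)
        return self.sweep_gate(joint) * shifted

# Usage: compose a batch of 512-d image embeddings with 512-d text embeddings
composer = GatedComposition(dim=512)
composed = composer(torch.randn(8, 512), torch.randn(8, 512))  # shape (8, 512)

The composed embedding could then be compared against candidate image embeddings with a standard metric-learning loss; the details of the loss and training are not specified in this record.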

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2003.12299
Document Type :
Working Paper