Discriminative multimodal embedding for event classification.

Authors :
Qi, Fan
Yang, Xiaoshan
Zhang, Tianzhu
Xu, Changsheng
Source :
Neurocomputing. Jun2020, Vol. 395, p160-169. 10p.
Publication Year :
2020

Abstract

Most existing multimodal event classification methods fuse traditional hand-crafted features with manually defined weights, which may not be suitable for event classification over large collections of photos. Moreover, feature extraction and event classification are typically performed separately, so the model cannot capture the features most useful for describing the semantic concepts of complex events. To address these issues, we propose a novel discriminative multimodal embedding (DME) model for event classification in user-generated photos, which jointly learns the representation together with the classifier in a unified framework. By imposing contrastive constraints on the multimodal event data, the proposed DME model effectively handles the challenges of multimodality, intra-class variation, and inter-class confusion. Extensive experimental results on two collected datasets demonstrate the effectiveness of the proposed DME model for event classification.
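The contrastive constraints mentioned in the abstract can be illustrated with a minimal sketch: pairs of embeddings from the same event class are pulled together, while pairs from different classes are pushed at least a margin apart. This is a generic contrastive-loss example, not the authors' actual DME formulation; the vectors, margin value, and function names below are illustrative assumptions.

```python
import math

def contrastive_loss(a, b, same_class, margin=2.0):
    """Generic contrastive loss on a pair of embeddings in a shared space.

    Same-class pairs are penalized by their squared distance (pulled together);
    different-class pairs are penalized only if closer than `margin` (pushed apart).
    The margin of 2.0 is an illustrative choice, not taken from the paper.
    """
    d = math.dist(a, b)  # Euclidean distance in the shared embedding space
    if same_class:
        return 0.5 * d ** 2
    return 0.5 * max(0.0, margin - d) ** 2

# Toy 2-D embeddings standing in for two modalities (e.g. a photo and its
# text metadata) projected into a common space -- purely hypothetical values.
img = [0.1, 0.9]
txt_same = [0.2, 0.8]   # metadata from the same event class
txt_diff = [0.9, 0.1]   # metadata from a different event class

loss_pos = contrastive_loss(img, txt_same, same_class=True)
loss_neg = contrastive_loss(img, txt_diff, same_class=False)
```

Minimizing such a loss over many pairs yields an embedding space where event classes form separated clusters, which is the property the abstract's "intra-class variation and inter-class confusion" constraints target.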

Details

Language :
English
ISSN :
0925-2312
Volume :
395
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
143364384
Full Text :
https://doi.org/10.1016/j.neucom.2017.11.078