
The Labeled Multiple Canonical Correlation Analysis for Information Fusion

Authors:
Gao, Lei
Zhang, Rui
Qi, Lin
Chen, Enqing
Guan, Ling
Source:
IEEE Transactions on Multimedia, 2019
Publication Year:
2021

Abstract

The objective of multimodal information fusion is to mathematically analyze information carried in different sources and create a new representation which will be more effectively utilized in pattern recognition and other multimedia information processing tasks. In this paper, we introduce a new method for multimodal information fusion and representation based on the Labeled Multiple Canonical Correlation Analysis (LMCCA). By incorporating class label information of the training samples, the proposed LMCCA ensures that the fused features carry discriminative characteristics of the multimodal information representations, and are capable of providing superior recognition performance. We implement a prototype of LMCCA to demonstrate its effectiveness on handwritten digit recognition, face recognition and object recognition utilizing multiple features, and bimodal human emotion recognition involving information from both audio and visual domains. The generic nature of LMCCA allows it to take as input features extracted by any means, including those by deep learning (DL) methods. Experimental results show that the proposed method enhances the performance of both statistical machine learning (SML) methods and methods based on DL.
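To make the fusion idea concrete, below is a minimal illustrative sketch of correlation-based feature fusion using plain two-view canonical correlation analysis (CCA) with NumPy. This is a simplified baseline, not the paper's LMCCA: the label-supervised term that incorporates class information is omitted, and the function name `cca_fuse` and the ridge term `eps` are this sketch's own choices. The two views are projected onto their maximally correlated directions and the projections are concatenated into a fused representation.

```python
import numpy as np

def cca_fuse(X, Y, k=2, eps=1e-6):
    """Fuse two feature views by concatenating their top-k canonical variates.

    X: (n, dx) features from modality 1; Y: (n, dy) features from modality 2.
    Returns an (n, 2k) fused representation. `eps` is a small ridge for
    numerical stability (an assumption of this sketch, not from the paper).
    """
    # Center each view.
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    n = X.shape[0]

    # Covariance and cross-covariance matrices (with a small ridge).
    Sxx = Xc.T @ Xc / n + eps * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + eps * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n

    # Whiten via Cholesky factors (Sxx = Rx Rx^T), then SVD of the
    # whitened cross-covariance gives the canonical directions.
    Rx = np.linalg.cholesky(Sxx)
    Ry = np.linalg.cholesky(Syy)
    M = np.linalg.solve(Rx, Sxy) @ np.linalg.inv(Ry).T
    U, s, Vt = np.linalg.svd(M)

    # Map the whitened directions back to the original feature spaces.
    Wx = np.linalg.solve(Rx.T, U[:, :k])
    Wy = np.linalg.solve(Ry.T, Vt.T[:, :k])

    # Fused representation: projected view 1 followed by projected view 2.
    return np.hstack([Xc @ Wx, Yc @ Wy])
```

On two views generated from a shared latent signal, the first canonical pair (columns 0 and k of the fused matrix) will be nearly perfectly correlated, which is what makes the concatenated features a coherent joint representation for a downstream classifier.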

Details

Database:
arXiv
Journal:
IEEE Transactions on Multimedia, 2019
Publication Type:
Report
Accession number:
edsarx.2103.00359
Document Type:
Working Paper
Full Text:
https://doi.org/10.1109/TMM.2018.2859590