Network embedding by fusing multimodal contents and links.

Authors :
Huang, Feiran
Zhang, Xiaoming
Xu, Jie
Li, Chaozhuo
Li, Zhoujun
Source :
Knowledge-Based Systems. May 2019, Vol. 171, p44-55. 12p.
Publication Year :
2019

Abstract

Embedding a network into a low-dimensional space has attracted extensive research interest and has enabled a wide range of applications, such as node classification and link prediction. Most existing methods learn the network embedding from the network structure alone. However, social media data, such as social images, usually contain both multimodal contents (e.g., visual content and text descriptions) and social links among the images. To exploit both sources of information, we propose a novel model, the Attention-based Multi-view Variational Auto-Encoder (AMVAE), which fuses the links and the multimodal contents for more effective and efficient network embedding. Specifically, a Bi-LSTM (bidirectional long short-term memory) with an attention model is proposed to capture the fine-grained correlations between data modalities, e.g., certain words correspond to specific visual regions. A joint representation of the multimodal contents is learned accordingly. The network structure information and the learned representation of the multimodal contents are then treated as two views. To fuse the two views, a multi-view correlation learning based Variational Auto-Encoder (VAE) is proposed to learn the representation of each node. By jointly optimizing the two components in a holistic learning framework, the embeddings of the network structure and the multimodal contents are integrated and mutually reinforced. Experiments on three real-world datasets demonstrate the superiority of the proposed model in two applications, i.e., multi-label classification and link prediction. [ABSTRACT FROM AUTHOR]
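As a rough illustration of the two components the abstract describes, below is a minimal PyTorch sketch: an attention-based Bi-LSTM that fuses text tokens and visual-region features into a joint content view, and a two-view VAE that encodes the structure view and the content view into a shared node embedding. All class names, dimensions, and architectural details here are assumptions for illustration, not the authors' AMVAE implementation (see the DOI below for the paper itself).

```python
import torch
import torch.nn as nn

class AttentiveBiLSTM(nn.Module):
    """Bi-LSTM over word embeddings with attention over visual regions,
    producing a joint multimodal representation (the "content view").
    A sketch only; the paper's exact attention formulation may differ."""
    def __init__(self, word_dim, region_dim, hidden_dim):
        super().__init__()
        self.lstm = nn.LSTM(word_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim + region_dim, 1)
        self.proj = nn.Linear(2 * hidden_dim + region_dim, hidden_dim)

    def forward(self, words, regions):
        # words:   (B, T, word_dim)   text token embeddings
        # regions: (B, R, region_dim) visual region features
        h, _ = self.lstm(words)                  # (B, T, 2H)
        q = h.mean(dim=1)                        # text summary (B, 2H)
        # Score each visual region against the text summary.
        qr = q.unsqueeze(1).expand(-1, regions.size(1), -1)
        scores = self.attn(torch.cat([qr, regions], dim=-1))  # (B, R, 1)
        alpha = torch.softmax(scores, dim=1)
        v = (alpha * regions).sum(dim=1)         # attended visual vector
        return self.proj(torch.cat([q, v], dim=-1))  # joint content view

class TwoViewVAE(nn.Module):
    """VAE that encodes the structure view and the content view into one
    latent node embedding and penalizes reconstruction of both views."""
    def __init__(self, struct_dim, content_dim, latent_dim):
        super().__init__()
        self.enc = nn.Linear(struct_dim + content_dim, 2 * latent_dim)
        self.dec_s = nn.Linear(latent_dim, struct_dim)
        self.dec_c = nn.Linear(latent_dim, content_dim)

    def forward(self, s, c):
        mu, logvar = self.enc(torch.cat([s, c], dim=-1)).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = ((self.dec_s(z) - s) ** 2).mean() \
              + ((self.dec_c(z) - c) ** 2).mean()
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon + kl  # node embedding and ELBO-style loss

# Hypothetical usage: 2 nodes, 5 words and 4 visual regions per node.
content = AttentiveBiLSTM(word_dim=300, region_dim=512, hidden_dim=128)
vae = TwoViewVAE(struct_dim=64, content_dim=128, latent_dim=32)
c = content(torch.randn(2, 5, 300), torch.randn(2, 4, 512))
z, loss = vae(torch.randn(2, 64), c)  # z: (2, 32) node embeddings
```

Training both modules against a single loss, as in this sketch, mirrors the abstract's point that the structure and content embeddings are jointly optimized and mutually reinforced rather than learned in isolation.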

Details

Language :
English
ISSN :
0950-7051
Volume :
171
Database :
Academic Search Index
Journal :
Knowledge-Based Systems
Publication Type :
Academic Journal
Accession number :
135256041
Full Text :
https://doi.org/10.1016/j.knosys.2019.02.003