
Generalizing Multimodal Variational Methods to Sets

Authors:
Zhou, Jinzhao
Duan, Yiqun
Chen, Zhihong
Chang, Yu-Cheng
Lin, Chin-Teng
Publication Year:
2022

Abstract

Making sense of multiple modalities can yield a more comprehensive description of real-world phenomena. However, learning the co-representation of diverse modalities remains a long-standing challenge in emerging machine learning applications and research. Previous generative approaches to multimodal input approximate the joint-modality posterior with uni-modality posteriors combined as a product-of-experts (PoE) or a mixture-of-experts (MoE). We argue that these approximations lead to a defective bound for the optimization process and a loss of semantic connection among modalities. This paper presents a novel variational method on sets, the Set Multimodal VAE (SMVAE), for learning a multimodal latent space while handling the missing-modality problem. By modeling the joint-modality posterior distribution directly, the proposed SMVAE learns to exchange information between multiple modalities and compensates for the drawbacks caused by factorization. On public datasets from various domains, experimental results demonstrate that the proposed method is applicable to order-agnostic cross-modal generation while achieving outstanding performance compared to state-of-the-art multimodal methods. The source code for our method is available online at https://anonymous.4open.science/r/SMVAE-9B3C/.

Comment: First Submission
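The PoE factorization that the abstract critiques has a well-known closed form when each uni-modality posterior is Gaussian: the fused precision is the sum of the experts' precisions, and the fused mean is the precision-weighted average of their means. Below is a minimal NumPy sketch of this baseline fusion for reference; the function name, shapes, and the inclusion of a standard-normal prior expert are illustrative assumptions, not code from the SMVAE repository.

```python
import numpy as np

def poe_fuse(means, logvars):
    """Fuse uni-modality Gaussian posteriors q(z|x_m) = N(mu_m, var_m)
    into a product-of-experts joint posterior (hypothetical helper;
    illustrates the baseline the abstract argues against).

    means, logvars: arrays of shape (num_modalities, latent_dim).
    A standard-normal prior expert N(0, I) is included, a common choice
    in PoE-based multimodal VAEs.
    """
    # Prepend the prior expert: mean 0, log-variance 0 (unit variance).
    mu = np.vstack([np.zeros((1, means.shape[1])), means])
    logvar = np.vstack([np.zeros((1, logvars.shape[1])), logvars])

    precision = np.exp(-logvar)              # 1 / var per expert
    fused_var = 1.0 / precision.sum(axis=0)  # precisions add under PoE
    fused_mu = fused_var * (precision * mu).sum(axis=0)  # precision-weighted mean
    return fused_mu, fused_var

# Example: two observed modalities over a 4-dimensional latent space.
rng = np.random.default_rng(0)
fused_mu, fused_var = poe_fuse(rng.normal(size=(2, 4)), rng.normal(size=(2, 4)))
print(fused_mu.shape, fused_var.shape)  # (4,) (4,)
```

Missing modalities are handled under PoE by simply omitting the corresponding experts from the product. The abstract's claim is that this factorized fusion yields a defective bound and weakens the semantic connection among modalities, which is what motivates modeling the joint posterior over the set of modalities directly.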

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2212.09918
Document Type:
Working Paper