
Discriminative multimodal learning via conditional priors in generative models.

Authors :
Mancisidor, Rogelio A.
Kampffmeyer, Michael
Aas, Kjersti
Jenssen, Robert
Source :
Neural Networks. Jan 2024, Vol. 169, p417-430. 14p.
Publication Year :
2024

Abstract

Deep generative models with latent variables have lately been used to learn joint representations and generative processes from multi-modal data, i.e. data depicting an object from different viewpoints. These two learning mechanisms can, however, conflict with each other, and representations can fail to embed information on the data modalities. This research studies the realistic scenario in which all modalities and class labels are available for model training, e.g. images or handwriting, but where some modalities and labels required for downstream tasks are missing, e.g. text or annotations. We show that, in this scenario, the variational lower bound limits mutual information between joint representations and missing modalities. To counteract these problems, we introduce a novel conditional multi-modal discriminative model that uses an informative prior distribution and optimizes a likelihood-free objective function that maximizes mutual information between joint representations and missing modalities. Extensive experimentation demonstrates the benefits of our proposed model: empirical results show that it achieves state-of-the-art performance in representative problems such as downstream classification, acoustic inversion, and image and annotation generation. [ABSTRACT FROM AUTHOR]
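The abstract above describes maximizing mutual information between joint representations and missing modalities with a likelihood-free objective, but this record does not specify the exact estimator the authors use. As an illustrative sketch only, a standard likelihood-free lower bound on mutual information is InfoNCE, which scores true representation/modality pairs against in-batch negatives; all names and shapes below are assumptions, not the paper's method:

```python
import numpy as np

def infonce_bound(z, y, temperature=0.1):
    """Estimate a lower bound on mutual information I(z; y) via InfoNCE.

    z : (N, D) joint representations, one per paired training sample
    y : (N, D) features of the modality that is missing at test time
    Positive pairs are (z[i], y[i]); the remaining N-1 rows in each
    batch act as negatives. (Illustrative stand-in, not the paper's
    actual objective.)
    """
    # Cosine-similarity score matrix between every z and every y.
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    y_n = y / np.linalg.norm(y, axis=1, keepdims=True)
    scores = z_n @ y_n.T / temperature            # shape (N, N)
    # Row-wise log-softmax; the diagonal holds the positive pairs.
    log_prob = scores - np.log(np.exp(scores).sum(axis=1, keepdims=True))
    # InfoNCE: mean log-probability of the true pairing, plus log N,
    # lower-bounds I(z; y) and is capped at log N.
    n = z.shape[0]
    return np.log(n) + np.mean(np.diag(log_prob))
```

Training a representation to increase this bound encourages the joint representation z to retain information about the missing modality y, which is the intuition the abstract appeals to.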

Details

Language :
English
ISSN :
0893-6080
Volume :
169
Database :
Academic Search Index
Journal :
Neural Networks
Publication Type :
Academic Journal
Accession number :
174322333
Full Text :
https://doi.org/10.1016/j.neunet.2023.10.048