
Adversarial Symmetric Variational Autoencoder

Authors :
Pu, Yunchen
Wang, Weiyao
Henao, Ricardo
Chen, Liqun
Gan, Zhe
Li, Chunyuan
Carin, Lawrence
Publication Year :
2017

Abstract

A new form of variational autoencoder (VAE) is developed, in which the joint distribution of data and codes is considered in two (symmetric) forms: (i) from observed data fed through the encoder to yield codes, and (ii) from latent codes drawn from a simple prior and propagated through the decoder to manifest data. Lower bounds are learned for the marginal log-likelihoods of observed data and latent codes. When learning with the variational bound, one seeks to minimize the symmetric Kullback-Leibler divergence between the joint density functions from (i) and (ii), while simultaneously seeking to maximize the two marginal log-likelihoods. To facilitate learning, a new form of adversarial training is developed. An extensive set of experiments is performed, in which we demonstrate state-of-the-art data reconstruction and generation on several image benchmark datasets.

Comment: Accepted to NIPS 2017
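
The following is a minimal PyTorch sketch of the symmetric idea the abstract describes, not the authors' reference implementation. It assumes a single joint discriminator D(x, z) trained to tell apart (x, z) pairs from the encoder path (i), q(x)q(z|x), and the decoder path (ii), p(z)p(x|z); the critic's logit is then used as an estimate of the log density ratio entering the symmetric KL term. All class names, the train_step function, and the sizes X_DIM, Z_DIM, H are illustrative assumptions.

    # Sketch of a symmetric adversarial VAE objective (simplified, single critic).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    X_DIM, Z_DIM, H = 784, 64, 256  # e.g. flattened 28x28 images; hypothetical sizes

    class Encoder(nn.Module):
        """q(z|x): diagonal Gaussian, sampled via the reparameterization trick."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(X_DIM, H), nn.ReLU())
            self.mu = nn.Linear(H, Z_DIM)
            self.logvar = nn.Linear(H, Z_DIM)
        def forward(self, x):
            h = self.body(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
            return z, mu, logvar

    class Decoder(nn.Module):
        """p(x|z): Bernoulli likelihood parameterized by logits."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(Z_DIM, H), nn.ReLU(),
                                      nn.Linear(H, X_DIM))
        def forward(self, z):
            return self.body(z)

    class Discriminator(nn.Module):
        """D(x, z): critic on joint (data, code) pairs."""
        def __init__(self):
            super().__init__()
            self.body = nn.Sequential(nn.Linear(X_DIM + Z_DIM, H), nn.ReLU(),
                                      nn.Linear(H, 1))
        def forward(self, x, z):
            return self.body(torch.cat([x, z], dim=1)).squeeze(1)

    enc, dec, disc = Encoder(), Decoder(), Discriminator()
    opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=1e-4)

    def train_step(x):  # x: (batch, X_DIM), values in [0, 1]
        n = x.size(0)
        # Path (i): observed data fed through the encoder to yield codes.
        z_q, _, _ = enc(x)
        # Path (ii): prior codes propagated through the decoder; the Bernoulli
        # mean is used instead of a hard sample so gradients reach the decoder.
        z_p = torch.randn(n, Z_DIM)
        x_p = torch.sigmoid(dec(z_p))

        # Discriminator update: label path-(i) pairs 1, path-(ii) pairs 0.
        d_loss = (F.binary_cross_entropy_with_logits(disc(x, z_q.detach()),
                                                     torch.ones(n))
                  + F.binary_cross_entropy_with_logits(disc(x_p.detach(), z_p),
                                                       torch.zeros(n)))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: reconstruction terms for both marginal likelihoods,
        # plus the critic-based estimate of the symmetric KL between the joints.
        recon_x = F.binary_cross_entropy_with_logits(dec(z_q), x)  # -log p(x|z)
        _, mu_p, logvar_p = enc(x_p)                               # q(z|x) on path (ii)
        recon_z = 0.5 * (logvar_p + (z_p - mu_p) ** 2 / logvar_p.exp()).mean()
        sym_kl = disc(x, z_q).mean() - disc(x_p, z_p).mean()
        g_loss = recon_x + recon_z + sym_kl
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

The paper's full objective couples two variational bounds with its own adversarial formulation; the single-critic, binary-cross-entropy setup above is a simplification in the spirit of the symmetric KL minimization the abstract outlines.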

Subjects

Computer Science - Learning

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.1711.04915
Document Type :
Working Paper