
Data Augmentation with Cross-Modal Variational Autoencoders (DACMVA) for Cancer Survival Prediction.

Authors :
Rajaram, Sara
Mitchell, Cassie S.
Source :
Information (2078-2489). Jan 2024, Vol. 15, Issue 1, p7. 14p.
Publication Year :
2024

Abstract

The ability to translate Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs) into different modalities and data types is essential for improving Deep Learning (DL) for predictive medicine. This work presents DACMVA, a novel framework to conduct data augmentation in a cross-modal dataset by translating between modalities and oversampling imputations of missing data. DACMVA was inspired by previous work on the alignment of latent spaces in autoencoders. DACMVA is a DL data augmentation pipeline that improves performance on a downstream prediction task. The DACMVA framework leverages a cross-modal loss to improve imputation quality and employs training strategies to enable regularized latent spaces. Oversampling of the augmented data is integrated into prediction training. It is empirically demonstrated that the DACMVA framework is effective in the often-neglected scenario of DL training on tabular data with continuous labels. Specifically, DACMVA is applied to cancer survival prediction on tabular gene expression data where a portion of the data in a given modality is missing. DACMVA significantly (p << 0.001, one-sided Wilcoxon signed-rank test) outperformed the non-augmented baseline and competing augmentation methods across varying percentages of missing data (4%, 90%, and 95% missing). As such, DACMVA provides significant performance improvements, even in very-low-data regimes, over existing state-of-the-art methods, including TDImpute and oversampling alone. [ABSTRACT FROM AUTHOR]
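
Illustrative sketch (editorial addition, not the authors' released code): the abstract describes translating between modalities with aligned VAE latent spaces, using a cross-modal loss to impute a missing modality and then oversampling the imputations during prediction training. The PyTorch sketch below shows one way such a pipeline could look under those assumptions; the class and function names (ModalityVAE, cross_modal_loss, impute_b_from_a), layer sizes, and loss weights are hypothetical choices for this example, not details taken from the paper.

import torch
import torch.nn as nn
import torch.nn.functional as F

class ModalityVAE(nn.Module):
    """One VAE per modality (e.g., two gene expression modalities)."""
    def __init__(self, in_dim, latent_dim=32, hidden=128):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        self.dec = nn.Sequential(nn.Linear(latent_dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, in_dim))

    def encode(self, x):
        h = self.enc(x)
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.dec(z), mu, logvar


def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), averaged over batch and dimensions.
    return -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())


def cross_modal_loss(vae_a, vae_b, x_a, x_b, beta=1e-3, gamma=1.0):
    # Per-modality VAE terms (reconstruction + weighted KL) on paired samples...
    recon_a, mu_a, logvar_a = vae_a(x_a)
    recon_b, mu_b, logvar_b = vae_b(x_b)
    loss = (F.mse_loss(recon_a, x_a) + F.mse_loss(recon_b, x_b)
            + beta * (kl_to_standard_normal(mu_a, logvar_a)
                      + kl_to_standard_normal(mu_b, logvar_b)))
    # ...plus a cross-modal term: decode modality B from modality A's latent
    # mean, aligning the latent spaces so A can later be translated into B.
    loss = loss + gamma * F.mse_loss(vae_b.dec(mu_a), x_b)
    return loss


@torch.no_grad()
def impute_b_from_a(vae_a, vae_b, x_a):
    # Translate samples that only have modality A into synthetic modality B;
    # these imputed rows can be oversampled when training the survival model.
    mu_a, _ = vae_a.encode(x_a)
    return vae_b.dec(mu_a)

In such a setup, the rows returned by impute_b_from_a would be mixed with (and oversampled relative to) the observed samples when fitting the downstream survival predictor; the paper's actual oversampling strategy, survival loss, and regularization schedule are not reproduced here.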

Details

Language :
English
ISSN :
20782489
Volume :
15
Issue :
1
Database :
Academic Search Index
Journal :
Information (2078-2489)
Publication Type :
Academic Journal
Accession number :
175078436
Full Text :
https://doi.org/10.3390/info15010007