
Will the Inclusion of Generated Data Amplify Bias Across Generations in Future Image Classification Models?

Authors:
Zhang, Zeliang
Liang, Xin
Feng, Mingqian
Liang, Susan
Xu, Chenliang
Publication Year:
2024

Abstract

As the demand for high-quality training data escalates, researchers have increasingly turned to generative models to create synthetic data, addressing data scarcity and enabling continuous model improvement. However, reliance on self-generated data introduces a critical question: Will this practice amplify bias in future models? While most research has focused on overall performance, the impact on model bias, particularly subgroup bias, remains underexplored. In this work, we investigate the effects of generated data on image classification tasks, with a specific focus on bias. We develop a practical simulation environment that integrates a self-consuming loop, where the generative model and classification model are trained synergistically. Hundreds of experiments are conducted on Colorized MNIST, CIFAR-20/100, and Hard ImageNet datasets to reveal changes in fairness metrics across generations. In addition, we provide a conjecture to explain the bias dynamics when training models on continuously augmented datasets across generations. Our findings contribute to the ongoing debate on the implications of synthetic data for fairness in real-world applications.

Comment: 15 pages, 7 figures
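The self-consuming loop described in the abstract can be illustrated with a minimal toy sketch: fit a simple "generative model" on the current dataset, append its samples to the training pool, retrain the classifier, and record a subgroup fairness metric each generation. This is an assumption-laden stand-in (1-D Gaussian generator, nearest-centroid classifier, accuracy gap between two hypothetical subgroups), not the paper's actual simulation environment or metrics.

```python
import random
import statistics

def fit_generator(data):
    # Toy "generative model": per-class mean/stdev of a 1-D feature.
    return {c: (statistics.mean(xs), statistics.pstdev(xs) or 1e-6)
            for c, xs in data.items()}

def sample(gen, n_per_class, rng):
    # Draw synthetic examples from the fitted per-class Gaussians.
    return {c: [rng.gauss(mu, sd) for _ in range(n_per_class)]
            for c, (mu, sd) in gen.items()}

def fit_classifier(data):
    # Toy classifier: nearest class centroid on the 1-D feature.
    return {c: statistics.mean(xs) for c, xs in data.items()}

def accuracy(clf, data):
    correct = total = 0
    for c, xs in data.items():
        for x in xs:
            pred = min(clf, key=lambda k: abs(x - clf[k]))
            correct += (pred == c)
            total += 1
    return correct / total

def self_consuming_loop(real, test_groups, generations=5,
                        synth_per_class=50, seed=0):
    # Each generation: fit generator, augment the pool with its samples,
    # retrain the classifier, and record the subgroup accuracy gap.
    rng = random.Random(seed)
    data = {c: list(xs) for c, xs in real.items()}
    gaps = []
    for _ in range(generations):
        gen = fit_generator(data)
        synth = sample(gen, synth_per_class, rng)
        for c in data:
            data[c].extend(synth[c])          # continuously augmented dataset
        clf = fit_classifier(data)
        accs = [accuracy(clf, g) for g in test_groups]
        gaps.append(max(accs) - min(accs))    # fairness metric per generation
    return gaps
```

Tracking `gaps` across generations mirrors the abstract's question: whether the disparity between subgroups grows as each model is trained on data partly produced by its predecessor.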

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2410.10160
Document Type:
Working Paper