
Exploring the potential of prototype-based soft-labels data distillation for imbalanced data classification

Authors :
Rosu, Radu-Andrei
Breaban, Mihaela-Elena
Luchian, Henri
Source :
24th International Symposium on Symbolic and Numeric Algorithms for Scientific Computing (SYNASC), pp. 173-180, 2022. IEEE
Publication Year :
2024

Abstract

Dataset distillation aims to synthesize a small number of artificially generated data items which, when used as training data, reproduce or approximate the machine learning (ML) model that would result from training on the entire original dataset. Consequently, data distillation methods are usually tied to a specific ML algorithm. While recent literature deals mainly with the distillation of large collections of images in the context of neural network models, tabular data distillation is much less represented and mainly approached from a theoretical perspective. The current paper explores the potential of a simple distillation technique previously proposed in the context of less-than-one-shot learning. The main goal is to push further the performance of prototype-based soft-label distillation in terms of classification accuracy, by integrating optimization steps into the distillation process. The analysis is performed on real-world datasets with various degrees of imbalance. Experimental studies trace the capability of the method to distill the data, but also the opportunity to act as an augmentation method, i.e., to generate new data that increases model accuracy when used in conjunction with, as opposed to instead of, the original data.
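To make the idea of prototype-based soft-label distillation concrete, the following is a minimal, hypothetical NumPy sketch, not the paper's exact procedure: a few prototypes are placed by plain k-means over an imbalanced toy dataset, each prototype receives a soft label (a class distribution) derived from the kernel-weighted class mix of the original points it attracts, and new points are classified by kernel-weighted interpolation of prototype soft labels. All names and the kernel choice are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced two-class dataset (illustrative, not the paper's benchmarks):
# 90 majority-class points vs. 10 minority-class points.
X0 = rng.normal(loc=[0.0, 0.0], scale=1.0, size=(90, 2))
X1 = rng.normal(loc=[3.0, 3.0], scale=1.0, size=(10, 2))
X = np.vstack([X0, X1])
y = np.array([0] * 90 + [1] * 10)

def kmeans(X, k, iters=20):
    # Plain Lloyd's k-means to position the distilled prototypes.
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        d = ((X[:, None, :] - C[None, :, :]) ** 2).sum(-1)
        assign = d.argmin(1)
        for j in range(k):
            if (assign == j).any():
                C[j] = X[assign == j].mean(0)
    return C

# Distill: 4 prototypes carry the information of 100 original points.
P = kmeans(X, k=4)

# Soft labels: per-prototype class distribution, weighted by a Gaussian
# kernel over distances to the original points (an assumed kernel choice).
d = ((X[:, None, :] - P[None, :, :]) ** 2).sum(-1)   # (n_points, n_protos)
w = np.exp(-d)
soft = np.stack([w[y == c].sum(0) for c in (0, 1)], axis=1)
soft /= soft.sum(1, keepdims=True)

def predict(x, P, soft):
    # Classify by kernel-weighted average of the prototype soft labels.
    wx = np.exp(-((x - P) ** 2).sum(1))
    return int((wx @ soft / wx.sum()).argmax())

acc = np.mean([predict(x, P, soft) == t for x, t in zip(X, y)])
```

Training any downstream classifier on the 4 soft-labeled prototypes instead of the 100 original points is the distillation step; the optimization the paper proposes would further adjust prototype positions and soft labels to improve accuracy.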

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2403.17130
Document Type :
Working Paper