
XtremeAugment: Getting More From Your Data Through Combination of Image Collection and Image Augmentation

Authors :
Sergey Nesteruk
Svetlana Illarionova
Timur Akhtyamov
Dmitrii Shadrin
Andrey Somov
Mariia Pukalchik
Ivan Oseledets
Source :
IEEE Access, Vol 10, Pp 24010-24028 (2022)
Publication Year :
2022
Publisher :
IEEE, 2022.

Abstract

Deep convolutional neural networks are highly effective for computer vision tasks when plenty of training data is available; however, small training datasets remain a problem. Addressing it requires a training pipeline that handles rare object types and an overall lack of training data, so that well-performing models with stable predictions can still be built. This article reports on XtremeAugment, a comprehensive framework that provides an easy, reliable, and scalable way to collect image datasets and to efficiently label and augment the collected data. The framework consists of two augmentation techniques that can be used independently and that complement each other when applied together: Hardware Dataset Augmentation (HDA) and Object-Based Augmentation (OBA). HDA allows users to collect more data and spend less time on manual data labeling. OBA significantly increases training data variability while keeping the distribution of the augmented images close to that of the original dataset. We assess the proposed approach on an apple spoilage segmentation scenario. Our results demonstrate a substantial increase in model accuracy, reaching a 0.91 F1-score and outperforming the baseline model by up to 0.62 F1-score in a few-shot learning case on in-the-wild data. The benefit of applying XtremeAugment is highest when images are collected in a controlled indoor environment but the model must be used in the wild.
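The core idea behind object-based augmentation as described above is compositing annotated object crops onto new backgrounds to multiply training variability. Below is a minimal NumPy sketch of this kind of copy-paste augmentation; the function names, the random-placement policy, and the single-object handling are illustrative assumptions for this record, not the paper's actual implementation:

```python
import numpy as np

def paste_object(background, obj, mask, top, left):
    """Paste a masked object crop onto a copy of a background image.

    background: (H, W, 3) uint8 image
    obj:        (h, w, 3) uint8 object crop
    mask:       (h, w) bool array, True where the object's pixels are
    top, left:  upper-left corner of the paste location
    """
    out = background.copy()
    h, w = mask.shape
    # Boolean-mask assignment copies only the object's pixels,
    # leaving the background visible elsewhere in the patch.
    out[top:top + h, left:left + w][mask] = obj[mask]
    return out

def augment(background, obj, mask, rng):
    """Create one augmented sample by pasting the object at a random position."""
    H, W, _ = background.shape
    h, w = mask.shape
    top = int(rng.integers(0, H - h + 1))
    left = int(rng.integers(0, W - w + 1))
    return paste_object(background, obj, mask, top, left)
```

Because the object mask is known at paste time, the segmentation label for the augmented image can be generated automatically the same way, which is what makes this style of augmentation cheap to label.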

Details

Language :
English
ISSN :
2169-3536
Volume :
10
Database :
Directory of Open Access Journals
Journal :
IEEE Access
Publication Type :
Academic Journal
Accession number :
edsdoj.7dfc0072490e4b60bd0b6a7515fcecfc
Document Type :
article
Full Text :
https://doi.org/10.1109/ACCESS.2022.3154709