
Towards well-generalizing meta-learning via adversarial task augmentation.

Authors :
Wang, Haoqing
Mai, Huiyu
Gong, Yuhang
Deng, Zhi-Hong
Source :
Artificial Intelligence. Apr 2023, Vol. 317, Article 103875.
Publication Year :
2023

Abstract

Meta-learning aims to use knowledge from previous tasks to facilitate the learning of novel tasks. Many meta-learning models elaborately design various task-shared inductive biases and learn them from a large number of tasks, so the generalization capability of the learned inductive bias depends on the diversity of the training tasks. A common assumption in meta-learning is that the training tasks and the test tasks come from the same or similar task distributions. However, this assumption is usually not strictly satisfied in practice, so meta-learning models need to cope with various novel in-domain or cross-domain tasks. To this end, we propose to use task augmentation to increase the diversity of the training tasks, thereby improving the generalization capability of meta-learning models. Concretely, we consider the worst-case problem around the base task distribution and derive an adversarial task augmentation method that can generate inductive-bias-adaptive 'challenging' tasks. Our method can be used as a simple plug-and-play module for various meta-learning models to improve their generalization capability. We conduct extensive experiments under in-domain and cross-domain few-shot learning and unsupervised few-shot learning settings, and evaluate our method on different types of data (images and text). Experimental results show that our method effectively improves the generalization capability of various meta-learning models under different settings. [ABSTRACT FROM AUTHOR]
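The worst-case idea sketched in the abstract — perturbing a sampled task so that it becomes harder for the current model — can be illustrated with a toy example. The sketch below is not the paper's actual procedure: it assumes a fixed linear predictor, a mean-squared-error loss, and an FGSM-style projected gradient-ascent step on the task inputs, all of which are illustrative choices rather than details taken from the article.

```python
import numpy as np

def adversarial_task_augmentation(X, y, w, step=0.1, n_steps=5, eps=0.5):
    """Toy sketch of adversarial task augmentation: perturb the task
    inputs X by gradient *ascent* on the model's loss, producing a
    'challenging' variant of the task.  The model here is a fixed
    linear predictor w and the loss is MSE; the perturbation is kept
    inside an L-infinity ball of radius eps around the original inputs.
    (All of these modeling choices are assumptions for illustration.)"""
    X0 = X.copy()
    X_adv = X.copy()
    for _ in range(n_steps):
        # MSE loss L = mean((X w - y)^2); its gradient w.r.t. X is
        # (2/n) * outer(residual, w).
        resid = X_adv @ w - y                         # shape (n,)
        grad = 2.0 / len(y) * np.outer(resid, w)      # shape (n, d)
        X_adv = X_adv + step * np.sign(grad)          # ascent (FGSM-style)
        X_adv = np.clip(X_adv, X0 - eps, X0 + eps)    # project into the ball
    return X_adv

def mse(X, y, w):
    """Mean squared error of the fixed linear predictor w on task (X, y)."""
    return float(np.mean((X @ w - y) ** 2))
```

In a meta-learning loop, the perturbed task `(X_adv, y)` would be fed to the meta-learner alongside (or instead of) the original task, so the learned inductive bias is exposed to harder task variants than the base distribution alone provides.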

Details

Language :
English
ISSN :
0004-3702
Volume :
317
Database :
Academic Search Index
Journal :
Artificial Intelligence
Publication Type :
Academic Journal
Accession number :
162061080
Full Text :
https://doi.org/10.1016/j.artint.2023.103875