
Neural architecture tuning with policy adaptation.

Authors :
Li, Yanxi
Dong, Minjing
Xu, Yixing
Wang, Yunhe
Xu, Chang
Source :
Neurocomputing. May 2022, Vol. 485, p. 196-204. 9 p.
Publication Year :
2022

Abstract

Neural architecture search (NAS) aims to automatically design task-specific neural architectures, whose performance has already surpassed that of many manually designed neural networks. Existing NAS techniques focus on searching for a neural architecture and training the optimal network weights from scratch. Nevertheless, in some scenarios it can be essential to study how to tune a given neural architecture instead of producing a completely new one, which may lead to a better solution by combining human experience with the advantages of the machine's automatic search. This paper proposes to learn to tune the architectures at hand to achieve better performance. The proposed Neural Architecture Tuning (NAT) algorithm trains a deep Q-network that, given a random architecture, tunes it so that we can achieve better performance on a reduced search space. We then apply an adversarial autoencoder so that the learned policy generalizes to a different search space in real-world applications. The proposed algorithm is evaluated on the NAS-Bench-101 dataset. The results indicate that our NAT framework achieves state-of-the-art performance on the NAS-Bench-101 benchmark, and that the learned policy can be adapted to a different search space while maintaining its performance.
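
As a rough illustration of the tuning loop the abstract describes, the sketch below applies one-step Q-learning to edit the operations of a toy cell encoding. None of this code is from the paper: evaluate_architecture is a hypothetical stand-in for the NAS-Bench-101 accuracy lookup, the state encoding and action set are assumptions, and the adversarial-autoencoder step for adapting the policy to a new search space is not shown.

    # Minimal sketch: DQN-style tuning of a fixed-length cell encoding.
    # Hypothetical throughout; the real NAT method queries NAS-Bench-101.
    import random
    import numpy as np
    import torch
    import torch.nn as nn

    N_OPS = 3                    # e.g. conv3x3 / conv1x1 / maxpool
    N_NODES = 5                  # nodes whose operations we tune
    N_ACTIONS = N_NODES * N_OPS  # an action picks (node, new op)

    def evaluate_architecture(ops: np.ndarray) -> float:
        """Toy proxy reward; a real system would look up benchmark accuracy."""
        return float(-np.abs(ops - 1).sum())

    class QNet(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(N_NODES * N_OPS, 64), nn.ReLU(),
                nn.Linear(64, N_ACTIONS),
            )

        def forward(self, x):
            return self.net(x)

    def one_hot(ops: np.ndarray) -> torch.Tensor:
        # Encode the architecture as a flat one-hot vector of per-node ops.
        return torch.from_numpy(np.eye(N_OPS, dtype=np.float32)[ops].reshape(-1))

    qnet = QNet()
    opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
    gamma, eps = 0.9, 0.2

    for episode in range(200):
        ops = np.random.randint(N_OPS, size=N_NODES)  # random starting architecture
        for step in range(10):                        # short tuning trajectory
            state = one_hot(ops)
            if random.random() < eps:                 # epsilon-greedy exploration
                action = random.randrange(N_ACTIONS)
            else:
                with torch.no_grad():
                    action = int(qnet(state).argmax())
            node, new_op = divmod(action, N_OPS)
            next_ops = ops.copy()
            next_ops[node] = new_op                   # tune one operation
            reward = evaluate_architecture(next_ops) - evaluate_architecture(ops)
            with torch.no_grad():
                target = reward + gamma * qnet(one_hot(next_ops)).max()
            loss = (qnet(state)[action] - target) ** 2  # one-step TD error
            opt.zero_grad(); loss.backward(); opt.step()
            ops = next_ops

In this toy version, the "reduced space" of the abstract corresponds to the short, fixed-length trajectories: the Q-network only ever proposes local edits to the current architecture rather than searching the full space from scratch.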

Subjects

*REINFORCEMENT learning

Details

Language :
English
ISSN :
0925-2312
Volume :
485
Database :
Academic Search Index
Journal :
Neurocomputing
Publication Type :
Academic Journal
Accession number :
155777379
Full Text :
https://doi.org/10.1016/j.neucom.2021.10.095