
LLMs Could Autonomously Learn Without External Supervision

Authors:
Ji, Ke
Chen, Junying
Gao, Anningzhe
Xie, Wenya
Wan, Xiang
Wang, Benyou
Publication Year:
2024

Abstract

In the quest for super-human performance, Large Language Models (LLMs) have traditionally been tethered to human-annotated datasets and predefined training objectives, a process that is both labor-intensive and inherently limited. This paper presents a transformative approach: Autonomous Learning for LLMs, a self-sufficient learning paradigm that frees models from the constraints of human supervision. This method endows LLMs with the ability to self-educate through direct interaction with text, akin to a human reading and comprehending literature. Our approach eliminates the reliance on annotated data, fostering an Autonomous Learning environment where the model independently identifies and reinforces its knowledge gaps. Empirical results from our comprehensive experiments, which utilized a diverse array of learning materials and were evaluated against standard public quizzes, reveal that Autonomous Learning outstrips the performance of both Pre-training and Supervised Fine-Tuning (SFT), as well as retrieval-augmented methods. These findings underscore the potential of Autonomous Learning not only to enhance the efficiency and effectiveness of LLM training but also to pave the way for the development of more advanced, self-reliant AI systems.

Comment: 20 pages, 8 figures
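The abstract's learning loop (read material, self-quiz to surface knowledge gaps, then reinforce only those gaps) can be sketched as a toy simulation. This is purely illustrative and is not the authors' algorithm: the class name `ToyLearner`, the methods, and the dictionary-based "knowledge" are all assumptions introduced here for clarity.

```python
class ToyLearner:
    """Toy stand-in for a model that studies material without external supervision."""

    def __init__(self):
        self.knowledge = {}  # question -> answer the learner has retained

    def read(self, material):
        """First pass over the material, with deliberately imperfect retention."""
        for i, (question, answer) in enumerate(material.items()):
            if i % 2 == 0:  # simulate partial comprehension: only every other fact sticks
                self.knowledge[question] = answer

    def self_quiz(self, material):
        """Identify knowledge gaps: questions the learner cannot answer correctly."""
        return [q for q, a in material.items() if self.knowledge.get(q) != a]

    def reinforce(self, material, gaps):
        """Re-study only the identified gaps, mimicking gap-targeted learning."""
        for q in gaps:
            self.knowledge[q] = material[q]

    def study(self, material):
        """Full autonomous loop: read, then quiz-and-reinforce until no gaps remain."""
        self.read(material)
        while gaps := self.self_quiz(material):
            self.reinforce(material, gaps)
        return self.knowledge

material = {"capital of France?": "Paris", "2 + 2?": "4", "H2O is?": "water"}
learner = ToyLearner()
learner.study(material)
print(len(learner.self_quiz(material)))  # 0: no remaining gaps
```

The point of the sketch is the control flow, not the learning mechanics: the "supervision signal" comes from the learner's own quiz over the raw text, with no annotated labels involved.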

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2406.00606
Document Type:
Working Paper