
Re-TASK: Revisiting LLM Tasks from Capability, Skill, and Knowledge Perspectives

Authors:
Wang, Zhihu
Zhao, Shiwan
Wang, Yu
Huang, Heyuan
Xie, Sitao
Zhang, Yubo
Shi, Jiaxin
Wang, Zhixing
Li, Hongyan
Yan, Junchi
Publication Year:
2024

Abstract

The Chain-of-Thought (CoT) paradigm has become a pivotal method for solving complex problems. However, its application to intricate, domain-specific tasks remains challenging, as large language models (LLMs) often struggle to accurately decompose these tasks and, even when decomposition is correct, fail to execute the subtasks effectively. This paper introduces the Re-TASK framework, a novel theoretical model that revisits LLM tasks from the perspectives of capability, skill, and knowledge, drawing on the principles of Bloom's Taxonomy and Knowledge Space Theory. While CoT offers a workflow perspective on tasks, the Re-TASK framework introduces a Chain-of-Learning view, illustrating how tasks and their corresponding subtasks depend on various capability items. Each capability item is further dissected into its constituent aspects of knowledge and skills. Our framework reveals that many CoT failures in domain-specific tasks stem from insufficient knowledge or inadequate skill adaptation. In response, we combine CoT with the Re-TASK framework and implement a carefully designed Re-TASK prompting strategy to improve task performance. Specifically, we identify core capability items linked to tasks and subtasks, then strengthen these capabilities through targeted knowledge injection and skill adaptation. We validate the Re-TASK framework on three datasets across the law, finance, and mathematics domains, achieving significant improvements over the baseline models. Notably, our approach yields a remarkable 44.42% improvement with the Yi-1.5-9B model and a 33.08% improvement with the Llama3-Chinese-8B model on the legal dataset. These experimental results confirm the effectiveness of the Re-TASK framework, demonstrating substantial enhancements in both the performance and applicability of LLMs.

Comment: Preprint; First three authors contributed equally
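The abstract gives no implementation details, but as a rough illustration of the prompting idea it describes (injecting knowledge and skill demonstrations for core capability items before the target question), the following Python sketch assembles such a prompt. The structure, field names, and example content are assumptions for illustration, not the authors' code.

```python
# Hypothetical sketch of a Re-TASK-style prompt builder (not the authors' implementation).
# Idea from the abstract: identify core capability items for a task, then strengthen them
# via targeted knowledge injection and skill adaptation (a worked demonstration) before
# asking the model to answer with chain-of-thought reasoning.

from dataclasses import dataclass
from typing import List


@dataclass
class CapabilityItem:
    """One capability item: the knowledge it rests on and a worked demonstration of the skill."""
    knowledge: str       # domain knowledge to inject (e.g., a statute, a formula)
    demonstration: str   # a solved example adapting the skill to that knowledge


def build_retask_prompt(question: str, items: List[CapabilityItem]) -> str:
    """Assemble a prompt that injects knowledge and skill demonstrations before the question."""
    parts: List[str] = []
    for i, item in enumerate(items, 1):
        parts.append(f"[Knowledge {i}] {item.knowledge}")
        parts.append(f"[Demonstration {i}] {item.demonstration}")
    parts.append(f"[Question] {question}")
    parts.append("Using the knowledge and demonstrations above, reason step by step, "
                 "then state the final answer.")
    return "\n\n".join(parts)


if __name__ == "__main__":
    # Illustrative capability item for a legal task (content is made up).
    items = [
        CapabilityItem(
            knowledge="Article X: theft of property above value N is punishable by ...",
            demonstration="Q: ... A: The value exceeds N, so Article X applies; the range is ...",
        )
    ]
    prompt = build_retask_prompt("What sentence range applies in the following case: ...?", items)
    print(prompt)  # send to any chat LLM in place of a plain CoT prompt
```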

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2408.06904
Document Type:
Working Paper