
Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement

Authors:
Xi, Zhiheng
Jin, Senjie
Zhou, Yuhao
Zheng, Rui
Gao, Songyang
Gui, Tao
Zhang, Qi
Huang, Xuanjing
Publication Year:
2023

Abstract

Prompting methods such as Chain-of-Thought (CoT) have shed new light on enhancing the reasoning capabilities of large language models, and researchers have extensively explored the generation of rationales and answers. However, they have overlooked the potential challenges posed by poorly formulated reasoning problems, which can significantly degrade reasoning performance. In this work, we propose Self-Polish (SP), a novel method that facilitates the model's problem-solving process by prompting it to progressively refine the given problem to be more comprehensible and solvable. Specifically, the method teaches models to eliminate irrelevant information, rearrange the logical structure, and consolidate local conditions into new ones in parallel. SP is orthogonal to all other prompting methods, making it convenient to integrate with state-of-the-art techniques for further improvement. We conduct thorough experiments on five benchmarks to illustrate the effectiveness of the proposed method. For example, with Text-davinci-003, our method boosts the performance of standard few-shot prompting by $8.0\%$ on GSM8K and $17.8\%$ on MultiArith; it also improves the performance of CoT by $6.0\%$ on GSM8K and $6.0\%$ on MathQA. Furthermore, our method demonstrates strong performance on robustness evaluations.

Preprint
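The refine-then-solve loop the abstract describes can be sketched in a few lines. The following Python sketch is only an illustration, not the paper's actual prompts or code: the generic `llm` callable, the prompt wording, the round cap, and the stop-when-unchanged convergence test are all assumptions made for the sketch.

```python
from typing import Callable

# Illustrative refinement instruction mirroring the three operations the
# abstract names; the paper's exact prompt wording may differ.
REFINE_PROMPT = (
    "Rewrite the following problem so it is clearer and easier to solve: "
    "remove irrelevant information, order the conditions logically, and "
    "merge related local conditions.\n\n"
    "Problem: {problem}\n\nRewritten problem:"
)

def self_polish(problem: str, llm: Callable[[str], str], max_rounds: int = 3) -> str:
    """Progressively refine a problem, stopping once it no longer changes."""
    current = problem
    for _ in range(max_rounds):
        refined = llm(REFINE_PROMPT.format(problem=current)).strip()
        if refined == current:  # converged: the model made no further edits
            break
        current = refined
    return current

def solve(problem: str, llm: Callable[[str], str]) -> str:
    """Answer the polished problem; any solver prompt fits here (CoT shown)."""
    polished = self_polish(problem, llm)
    return llm(f"Q: {polished}\nA: Let's think step by step.")
```

Because the refinement only rewrites the input problem, any downstream prompting method (standard few-shot, CoT, and so on) can be plugged into the final answering call, which is what makes the step orthogonal to the solver.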

Details

Language:
English
Database:
OpenAIRE
Accession number:
edsair.doi.dedup.....e7b470143a29dc8b5f3b17f408dc7b5e