
Improving Cross-Domain Low-Resource Text Generation through LLM Post-Editing: A Programmer-Interpreter Approach

Authors:
Li, Zhuang
Haroutunian, Levon
Tumuluri, Raj
Cohen, Philip
Haffari, Gholamreza
Publication Year:
2024

Abstract

Post-editing has proven effective in improving the quality of text generated by large language models (LLMs) such as GPT-3.5 or GPT-4, particularly when directly updating their parameters to enhance text quality is infeasible or expensive. However, relying solely on smaller language models for post-editing can limit the LLMs' ability to generalize across domains. Moreover, the editing strategies in these methods are not optimally designed for text-generation tasks. To address these limitations, we propose a neural programmer-interpreter approach that preserves the domain generalization ability of LLMs when editing their output. The editing actions in this framework are specifically devised for text generation. Extensive experiments demonstrate that the programmer-interpreter significantly enhances GPT-3.5's performance in logical form-to-text conversion and low-resource machine translation, surpassing other state-of-the-art (SOTA) LLM post-editing methods in cross-domain settings.

Comment: EACL 2024 (Findings), short paper, 5 pages
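To make the programmer-interpreter split concrete, below is a minimal sketch of how an interpreter might deterministically apply an edit program to an LLM draft. The action set (KEEP, DELETE, REPLACE, INSERT), the `EditAction` type, and the hard-coded program are illustrative assumptions, not the paper's exact specification; in the paper, a trained programmer model would emit the edit actions.

```python
# Minimal sketch of a programmer-interpreter post-editing step.
# Assumption: token-level edit actions named KEEP/DELETE/REPLACE/INSERT;
# the paper's actual action vocabulary may differ.
from dataclasses import dataclass


@dataclass
class EditAction:
    op: str           # "KEEP", "DELETE", "REPLACE", or "INSERT"
    token: str = ""   # new token, used only by REPLACE and INSERT


def interpret(draft_tokens: list[str], program: list[EditAction]) -> list[str]:
    """Deterministically apply an edit program to the LLM's draft output."""
    out: list[str] = []
    i = 0
    for action in program:
        if action.op == "INSERT":      # adds a token, consumes no draft token
            out.append(action.token)
            continue
        if i >= len(draft_tokens):     # program ran past the draft
            break
        if action.op == "KEEP":
            out.append(draft_tokens[i])
        elif action.op == "REPLACE":
            out.append(action.token)
        # DELETE: consume the draft token without emitting anything
        i += 1
    out.extend(draft_tokens[i:])       # any untouched tail is kept verbatim
    return out


# Hypothetical usage: in the full framework a small "programmer" model
# predicts the program; here it is hard-coded for illustration.
draft = "the cat sat in mat".split()
program = [
    EditAction("KEEP"), EditAction("KEEP"), EditAction("KEEP"),
    EditAction("REPLACE", "on"), EditAction("INSERT", "the"),
    EditAction("KEEP"),
]
print(" ".join(interpret(draft, program)))  # -> "the cat sat on the mat"
```

The design point this sketch illustrates: the interpreter is a fixed, deterministic executor, so the small trained component only predicts compact edit actions rather than regenerating the whole output, which is consistent with the abstract's claim that editing the LLM's draft, rather than replacing it, helps preserve its cross-domain generalization.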

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2402.04609
Document Type:
Working Paper