
A Deep Dive into Large Language Models for Automated Bug Localization and Repair

Authors :
Hossain, Soneya Binta
Jiang, Nan
Zhou, Qiang
Li, Xiaopeng
Chiang, Wen-Hao
Lyu, Yingjun
Nguyen, Hoan
Tripp, Omer
Publication Year :
2024

Abstract

Large language models (LLMs) have shown impressive effectiveness in various software engineering tasks, including automated program repair (APR). In this study, we take a deep dive into automated bug fixing utilizing LLMs. In contrast to many deep learning-based APR methods that assume known bug locations, rely on line-level localization tools, or address bug prediction and fixing in one step, our approach uniquely employs LLMs to predict bug location at the token level and subsequently utilizes them for bug fixing. This methodological separation of bug localization and fixing using different LLMs enables effective integration of diverse contextual information and improved incorporation of inductive biases. We introduce Toggle: Token-Granulated Bug Localization and Repair, a comprehensive program repair framework that integrates a bug localization model, an adjustment unit, and a bug-fixing model. Toggle takes a buggy function as input and generates a complete corrected function. We investigate various styles of prompting the bug-fixing model to identify the most effective prompts, which better utilize the inductive bias and significantly outperform the others. Toggle achieves new state-of-the-art (SOTA) performance on the CodeXGLUE code refinement benchmark and exhibits better or comparable performance on several other widely used APR datasets, including Defects4J.
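The three-stage pipeline described in the abstract (token-level localization, an adjustment unit, then a fixing model) can be sketched as follows. This is an illustrative toy, not the paper's implementation: the actual localization and fixing stages are LLMs, which are stubbed here with simple heuristics, and all function names are hypothetical.

```python
def localize_bug_tokens(tokens):
    """Stand-in for the bug-localization LLM: predict a (start, end)
    token span believed to contain the bug."""
    # Toy heuristic: flag the token span around a '<' operator.
    for i, tok in enumerate(tokens):
        if tok == "<":
            return (i, i + 1)
    return (0, len(tokens))

def adjust_span(tokens, span):
    """Stand-in for the adjustment unit: widen the predicted span so it
    aligns with a complete syntactic unit (here, a parenthesized condition)."""
    start, end = span
    while start > 0 and tokens[start - 1] != "(":
        start -= 1
    while end < len(tokens) and tokens[end] != ")":
        end += 1
    return (start, end)

def fix_bug(tokens, span):
    """Stand-in for the bug-fixing LLM: rewrite only the buggy span,
    keeping the surrounding prefix/suffix tokens as context."""
    start, end = span
    fixed = ["<=" if t == "<" else t for t in tokens[start:end]]
    return tokens[:start] + fixed + tokens[end:]

# Buggy function fragment: off-by-one comparison 'a < b' should be 'a <= b'.
buggy = ["if", "(", "a", "<", "b", ")", "return", "a", ";"]
span = adjust_span(buggy, localize_bug_tokens(buggy))
fixed = fix_bug(buggy, span)
print(" ".join(fixed))  # if ( a <= b ) return a ;
```

The separation mirrors the abstract's design rationale: localization and fixing are distinct steps handled by distinct models, with the adjustment unit mediating between the predicted token span and the input expected by the fixer.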

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2404.11595
Document Type :
Working Paper