
Bug-Transformer: Automated Program Repair Using Attention-Based Deep Neural Network.

Authors :
Yao, Jie
Rao, Bingbing
Xing, Weiwei
Wang, Liqiang
Source :
Journal of Circuits, Systems & Computers; 2022, Vol. 31 Issue 12, p1-26, 26p
Publication Year :
2022

Abstract

In this paper, we propose a novel transformer-based deep neural network model that learns semantic bug patterns from a corpus of buggy/fixed code and then generates correct code automatically. The transformer is a deep learning model that relies entirely on an attention mechanism to model global dependencies between input and output. Although a few prior efforts repair programs by learning neural language models (NLMs), important program properties, such as the structure and semantics of identifiers, are not effectively considered when embedding the input sequence and designing the model, which leads to poor performance. In the proposed Bug-Transformer, we design a novel context abstraction mechanism to better support neural language models. Specifically, it is capable of (1) compressing code information while preserving key structure and semantics, which provides more complete information to the NLM, (2) renaming identifiers and literals based on their lexical scopes and their structural and semantic information, which reduces the code vocabulary size, and (3) preserving keywords and selected idioms (domain- or developer-specific vocabularies) so that code structure and semantics remain recognizable. Hence, Bug-Transformer embeds structural and semantic code information into the input data and optimizes the attention-based transformer neural network to handle code features well, improving learning for bug repair. We evaluate the proposed work comprehensively on three datasets (Java code corpora) and generate patches for buggy code using a beam search decoder. The experimental results show that our proposed work outperforms state-of-the-art techniques: Bug-Transformer successfully predicts 54.81%, 34.45%, and 42.40% of the fixed code in the three datasets, respectively, outperforming the baseline models. These success rates steadily increase as the beam size grows. In addition, the overall syntactic correctness of all patches remains above 97%, 96%, and 50% on the three benchmarks, respectively, regardless of the beam size. [ABSTRACT FROM AUTHOR]
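To illustrate the kind of context abstraction the abstract describes, the sketch below shows one plausible way to rename identifiers and literals to scoped placeholders while keeping Java keywords and a small idiom list intact. This is a minimal illustration under assumed details, not the authors' implementation: the function name abstract_code, the placeholder scheme (VAR_n, STR_n, NUM_n), and the contents of JAVA_KEYWORDS and IDIOMS are hypothetical choices made for this example.

import re

# Java keywords are kept verbatim so the model can still see code structure.
JAVA_KEYWORDS = {
    "public", "private", "protected", "static", "void", "int", "boolean",
    "if", "else", "for", "while", "return", "new", "class", "null", "true", "false",
}

# Hypothetical idiom list (the paper selects domain- or developer-specific
# vocabularies); these tokens are also preserved instead of being abstracted.
IDIOMS = {"size", "length", "equals", "toString", "i", "0", "1"}

TOKEN_RE = re.compile(r'"[^"]*"|\d+|\w+|\S')

def abstract_code(source: str) -> tuple[str, dict[str, str]]:
    """Replace identifiers and literals with scoped placeholders, keeping keywords
    and idioms, and return the abstracted code plus the mapping needed to restore
    concrete names after a fix has been predicted."""
    mapping: dict[str, str] = {}
    counters = {"VAR": 0, "STR": 0, "NUM": 0}
    out = []

    def placeholder(tok: str, kind: str) -> str:
        # Reuse the same placeholder for repeated occurrences within this snippet.
        if tok not in mapping:
            counters[kind] += 1
            mapping[tok] = f"{kind}_{counters[kind]}"
        return mapping[tok]

    for tok in TOKEN_RE.findall(source):
        if tok in JAVA_KEYWORDS or tok in IDIOMS or not (tok[0].isalnum() or tok[0] in '"_'):
            out.append(tok)                     # keywords, idioms, punctuation: unchanged
        elif tok.startswith('"'):
            out.append(placeholder(tok, "STR")) # string literal
        elif tok[0].isdigit():
            out.append(placeholder(tok, "NUM")) # numeric literal
        else:
            out.append(placeholder(tok, "VAR")) # ordinary identifier
    return " ".join(out), mapping

buggy = 'if (items.size() > maxCount) { log.warn("overflow"); return maxCount; }'
abstracted, mapping = abstract_code(buggy)
print(abstracted)  # if ( VAR_1 . size ( ) > VAR_2 ) { VAR_3 . VAR_4 ( STR_1 ) ; return VAR_2 ; }
print(mapping)     # records how to map placeholders back to the original names

In a pipeline of this kind, the abstracted buggy sequence would be fed to the transformer, the beam search decoder would emit candidate abstracted fixes, and the recorded mapping would be used to restore concrete identifiers and literals in the generated patches.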

Details

Language :
English
ISSN :
0218-1266
Volume :
31
Issue :
12
Database :
Complementary Index
Journal :
Journal of Circuits, Systems & Computers
Publication Type :
Academic Journal
Accession number :
158427975
Full Text :
https://doi.org/10.1142/S0218126622502103