1. Learning the Relation Between Code Features and Code Transforms With Structured Prediction
- Author
Zhongxing Yu, Matias Martinez, Zimin Chen, Tegawendé F. Bissyandé, Martin Monperrus
- Affiliations
Universitat Politècnica de Catalunya, Departament d'Enginyeria de Serveis i Sistemes d'Informació; Universitat Politècnica de Catalunya, inSSIDE - integrated Software, Services, Information and Data Engineering
- Subjects
FOS: Computer and information sciences; Computer Science - Machine Learning (cs.LG); Computer Science - Programming Languages (cs.PL); Computer Science - Software Engineering (cs.SE); Computer science::Software engineering [UPC subject areas]; Computer software -- Verification; Program repair; Machine learning; Big code; Code transform; Software
- Abstract
To effectively guide the exploration of the code transform space for automated code evolution techniques, we present in this paper the first approach for structurally predicting code transforms at the level of AST nodes using conditional random fields (CRFs). Our approach first learns offline a probabilistic model that captures how certain code transforms are applied to certain AST nodes, and then uses the learned model to predict transforms for arbitrary new, unseen code snippets. Our approach involves a novel representation of both programs and code transforms. Specifically, we introduce a formal framework for defining so-called AST-level code transforms and we demonstrate how the CRF model can accordingly be designed, learned, and used for prediction. We instantiate our approach in the context of repair transform prediction for Java programs. Our instantiation contains a set of carefully designed code features, deals with the training-data imbalance issue, and comprises transform constraints that are specific to code. We conduct a large-scale experimental evaluation on a dataset of bug-fixing commits from real-world Java projects. The results show that, under the popular top-3 evaluation metric, our approach predicts code transforms with an accuracy ranging from 41% to 53% depending on the transform. Our model outperforms two baselines based on history probability and neural machine translation (NMT), suggesting the importance of considering code structure for achieving good prediction accuracy. In addition, a proof-of-concept synthesizer is implemented to concretize some repair transforms into final patches. The evaluation of the synthesizer on the Defects4J benchmark confirms the usefulness of the predicted AST-level repair transforms in producing high-quality patches.

This work was partially supported by the National Natural Science Foundation of China (Grant No. 62102233), the Shandong Province Overseas Outstanding Youth Fund (Grant No. 2022HWYQ-043), the Qilu Young Scholar Program of Shandong University, and the Wallenberg AI, Autonomous Systems and Software Program (WASP) funded by the Knut and Alice Wallenberg Foundation. Some experiments were performed on resources provided by the Swedish National Infrastructure for Computing.
- Published
2023
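
To illustrate the core idea summarized in the abstract, here is a minimal, hypothetical sketch (not the authors' implementation): transform labels are assigned jointly to all AST nodes using exact max-sum inference over the tree, which is the kind of inference a tree-shaped CRF performs at prediction time. The node kinds, code features, weights, and transform names below are invented for illustration only.

```python
# Toy structured prediction of per-node "repair transforms" over an AST.
# All labels, features, and weights are invented for illustration.
from dataclasses import dataclass, field

LABELS = ["NoTransform", "WrapWithIfNullCheck", "ChangeBinaryOperator"]

@dataclass
class AstNode:
    kind: str                              # e.g. "MethodCall", "BinaryExpr"
    features: dict                         # sparse code features for this node
    children: list = field(default_factory=list)

# Toy "learned" parameters: unary weights score (feature, label) pairs,
# pairwise weights score (parent label, child label) compatibility.
UNARY_W = {
    ("derefs_parameter", "WrapWithIfNullCheck"): 2.0,
    ("is_comparison", "ChangeBinaryOperator"): 1.5,
    ("any", "NoTransform"): 0.5,
}
PAIRWISE_W = {("WrapWithIfNullCheck", "NoTransform"): 0.3}

def unary_score(node, label):
    s = UNARY_W.get(("any", label), 0.0)
    for f in node.features:
        s += UNARY_W.get((f, label), 0.0)
    return s

def best_labeling(node):
    """For each label of `node`, return (best subtree score, best assignment),
    computed by max-sum dynamic programming over the tree."""
    child_tables = [best_labeling(c) for c in node.children]
    table = {}
    for lab in LABELS:
        score = unary_score(node, lab)
        assignment = {id(node): lab}
        for child, ctab in zip(node.children, child_tables):
            best = max(
                ctab.items(),
                key=lambda kv: kv[1][0] + PAIRWISE_W.get((lab, kv[0]), 0.0),
            )
            score += best[1][0] + PAIRWISE_W.get((lab, best[0]), 0.0)
            assignment.update(best[1][1])
        table[lab] = (score, assignment)
    return table

if __name__ == "__main__":
    # Toy AST for `if (x.size() > 0)`: the call dereferences a method parameter.
    call = AstNode("MethodCall", {"derefs_parameter": 1})
    cmp_ = AstNode("BinaryExpr", {"is_comparison": 1}, [call])
    root = AstNode("IfStmt", {}, [cmp_])

    table = best_labeling(root)
    best_root_label = max(table, key=lambda lab: table[lab][0])
    _, assignment = table[best_root_label]
    for node in (root, cmp_, call):
        print(node.kind, "->", assignment[id(node)])
```

The paper's actual approach differs in that the model parameters are learned from bug-fixing commits and the feature set and transform vocabulary are much richer; the sketch only shows why tree-structured inference lets the prediction for one node depend on the predictions for its neighbors.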