
Exploring Continual Learning for Code Generation Models

Authors :
Yadav, Prateek
Sun, Qing
Ding, Hantian
Li, Xiaopeng
Zhang, Dejiao
Tan, Ming
Ma, Xiaofei
Bhatia, Parminder
Nallapati, Ramesh
Ramanathan, Murali Krishna
Bansal, Mohit
Xiang, Bing
Publication Year :
2023

Abstract

Large-scale code generation models such as Codex and CodeT5 have achieved impressive performance. However, libraries are upgraded or deprecated frequently, and re-training large-scale language models is computationally expensive. Continual Learning (CL) is therefore an important aspect that remains underexplored in the code domain. In this paper, we introduce a benchmark called CodeTask-CL that covers a wide range of tasks, including code generation, translation, summarization, and refinement, with different input and output programming languages. On our CodeTask-CL benchmark, we then compare popular CL techniques from the NLP and Vision domains. We find that an otherwise effective method, Prompt Pooling (PP), suffers from catastrophic forgetting because stark distribution shifts across coding tasks destabilize training of its prompt selection mechanism. We address this issue with our proposed method, Prompt Pooling with Teacher Forcing (PP-TF), which stabilizes training by enforcing constraints on the prompt selection mechanism and yields a 21.54% improvement over Prompt Pooling. Alongside the benchmark, we establish a training pipeline for CL on code models, which we believe can motivate further development of CL methods for code models. Our code is available at https://github.com/amazon-science/codetaskcl-pptf

Comment: ACL 2023
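To make the mechanism in the abstract concrete, the sketch below shows L2P-style prompt pooling with one way of "teacher forcing" the selection: each task is pre-assigned a disjoint slice of the prompt pool, and selection is restricted to that slice during training so the key-matching cannot drift across tasks. This is a minimal illustration under assumed details; the class name, pool sizes, and the disjoint task-slot assignment are illustrative choices, not the paper's exact implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

class PromptPoolTF(torch.nn.Module):
    """Minimal sketch of prompt pooling with a teacher-forced
    prompt-to-task assignment (illustrative, not the paper's code)."""

    def __init__(self, pool_size=20, prompt_len=5, dim=768, top_k=3, num_tasks=4):
        super().__init__()
        # Learnable prompt vectors and the keys used to match them to inputs.
        self.prompts = torch.nn.Parameter(torch.randn(pool_size, prompt_len, dim) * 0.02)
        self.keys = torch.nn.Parameter(torch.randn(pool_size, dim) * 0.02)
        self.top_k = top_k
        # Teacher forcing (assumed form): give each task a fixed, disjoint
        # slice of the pool so selection stays stable as tasks change.
        per_task = pool_size // num_tasks
        self.task_slots = {
            t: list(range(t * per_task, (t + 1) * per_task)) for t in range(num_tasks)
        }

    def forward(self, query, task_id=None):
        # query: (batch, dim) pooled representation of the input sequence.
        sim = F.cosine_similarity(query.unsqueeze(1), self.keys.unsqueeze(0), dim=-1)
        if task_id is not None:
            # Training: mask out every prompt not assigned to this task.
            mask = torch.full_like(sim, float("-inf"))
            mask[:, self.task_slots[task_id]] = 0.0
            sim = sim + mask
        idx = sim.topk(self.top_k, dim=-1).indices          # (batch, top_k)
        selected = self.prompts[idx].flatten(1, 2)          # (batch, top_k*prompt_len, dim)
        # Matching loss pulls the selected keys toward the query, as in
        # L2P-style pooling; prompts are prepended to the input embeddings.
        match_loss = (1.0 - sim.gather(1, idx)).mean()
        return selected, match_loss
```

During training on task t, passing task_id=t pins selection to that task's slots; at inference, task_id=None lets the learned keys route each query freely across the whole pool.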

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2307.02435
Document Type :
Working Paper