1. Testing the Effect of Code Documentation on Large Language Model Code Understanding
- Authors
Macke, William and Doyle, Michael
- Subjects
Computer Science - Software Engineering, Computer Science - Artificial Intelligence, Computer Science - Computation and Language
- Abstract
Large Language Models (LLMs) have demonstrated impressive abilities in recent years with regard to code generation and understanding. However, little work has investigated how documentation and other code properties affect an LLM's ability to understand and generate code or documentation. We present an empirical analysis of how underlying properties of code or documentation can affect an LLM's capabilities. We show that providing an LLM with "incorrect" documentation can greatly hinder code understanding, while incomplete or missing documentation does not appear to significantly affect an LLM's ability to understand code.
- Comment
7 pages, 5 figures, 2 tables. Accepted as a Findings paper in the "Generation" track at NAACL 2024. MITRE Public Release Case Number 23-4132
- Published
2024