
Harnessing the Power of LLMs: Automating Unit Test Generation for High-Performance Computing

Authors:
Karanjai, Rabimba
Hussain, Aftab
Rabin, Md Rafiqul Islam
Xu, Lei
Shi, Weidong
Alipour, Mohammad Amin
Publication Year:
2024

Abstract

Unit testing is crucial for ensuring software quality, yet it is not widely practiced in parallel and high-performance computing software, particularly scientific applications, owing to their smaller, more specialized user bases and complex logic. These factors make unit testing challenging and expensive: it requires domain expertise, and existing automated tools are often ineffective. To address this, we propose an automated method for generating unit tests for such software that accounts for their distinctive features, such as complex logic and parallel processing. Large language models (LLMs) have recently shown promise in code generation and testing. We explored the capabilities of Davinci (text-davinci-002) and ChatGPT (gpt-3.5-turbo) in creating unit tests for C++ parallel programs. Our results show that LLMs can generate mostly correct and comprehensive unit tests, although they exhibit some limitations, such as repetitive assertions and blank test cases.
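The workflow the abstract describes, prompting an LLM such as gpt-3.5-turbo with a C++ parallel function and asking it to produce unit tests, can be sketched roughly as follows. The prompt wording, the `build_test_prompt` helper, and the `parallel_sum` OpenMP snippet are illustrative assumptions for this sketch, not the authors' actual prompts or tooling.

```python
# Illustrative sketch of LLM-based unit test generation for a C++
# parallel routine. Everything here (prompt text, helper name, target
# function) is a hypothetical example, not taken from the paper.

def build_test_prompt(cpp_source: str, function_name: str) -> str:
    """Assemble a prompt asking an LLM (e.g., gpt-3.5-turbo) to write
    unit tests covering typical and edge-case inputs of a C++ function."""
    return (
        "You are an expert C++ test engineer.\n"
        "Write self-contained unit tests (using assert) for the function "
        f"`{function_name}` below. Cover typical inputs, edge cases, and "
        "behavior under parallel execution.\n\n"
        "```cpp\n" + cpp_source + "\n```\n"
    )

# Hypothetical OpenMP reduction that the generated tests would target.
CPP_SNIPPET = """\
double parallel_sum(const std::vector<double>& xs) {
    double total = 0.0;
    #pragma omp parallel for reduction(+:total)
    for (size_t i = 0; i < xs.size(); ++i) total += xs[i];
    return total;
}"""

prompt = build_test_prompt(CPP_SNIPPET, "parallel_sum")
print(prompt.splitlines()[0])  # prints "You are an expert C++ test engineer."
```

In a real pipeline, the assembled prompt would be sent to the model's chat API and the returned test code compiled and executed against the target program; the paper's evaluation of correctness and comprehensiveness concerns that generated output.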

Details

Database:
arXiv
Publication Type:
Report
Accession number:
edsarx.2407.05202
Document Type:
Working Paper