
Can Large Language Models Write Parallel Code?

Authors :
Nichols, Daniel
Davis, Joshua H.
Xie, Zhaojun
Rajaram, Arjun
Bhatele, Abhinav
Source :
The 33rd International Symposium on High-Performance Parallel and Distributed Computing (HPDC '24), June 3-7, 2024, Pisa, Italy. ACM, New York, NY, USA, 14 pages
Publication Year :
2024

Abstract

Large language models are increasingly becoming a popular tool for software development. Their ability to model and generate source code has been demonstrated in a variety of contexts, including code completion, summarization, translation, and lookup. However, they often struggle to generate code for complex programs. In this paper, we study the capabilities of state-of-the-art language models to generate parallel code. In order to evaluate language models, we create a benchmark, ParEval, consisting of prompts that represent 420 different coding tasks related to scientific and parallel computing. We use ParEval to evaluate the effectiveness of several state-of-the-art open- and closed-source language models on these tasks. We introduce novel metrics for evaluating the performance of generated code, and use them to explore how well each large language model performs for twelve different computational problem types and six different parallel programming models.
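To make the benchmark idea concrete, the sketch below shows the general shape of a parallel-code-generation task: a natural-language description plus a function signature that the model must complete with a correct parallel implementation. This is an illustrative example only, written here in C++ with OpenMP (one of the parallel programming models the paper covers); the actual ParEval prompt format, task set, and reference solutions are defined in the paper itself.

// Hypothetical ParEval-style task (illustrative, not taken from the benchmark):
// the model receives the comment and signature and must supply the body.
#include <cstddef>
#include <vector>
#include <omp.h>

/* Compute the sum of all elements in x.
   Use OpenMP to parallelize the computation. */
double sumReduce(std::vector<double> const& x) {
    double total = 0.0;
    // Each thread accumulates a private partial sum that the
    // reduction clause combines at the end of the parallel loop.
    #pragma omp parallel for reduction(+ : total)
    for (std::size_t i = 0; i < x.size(); ++i) {
        total += x[i];
    }
    return total;
}

A generated completion like this can then be checked both for correctness against a sequential reference and for runtime behavior, which is the kind of evaluation the paper's performance-oriented metrics are designed to capture.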

Details

Database :
arXiv
Journal :
The 33rd International Symposium on High-Performance Parallel and Distributed Computing (HPDC '24), June 3-7, 2024, Pisa, Italy. ACM, New York, NY, USA, 14 pages
Publication Type :
Report
Accession Number :
edsarx.2401.12554
Document Type :
Working Paper
Full Text :
https://doi.org/10.1145/3625549.3658689