
Assessment of chemistry knowledge in large language models that generate code.

Authors :
White AD
Hocky GM
Gandhi HA
Ansari M
Cox S
Wellawatte GP
Sasmal S
Yang Z
Liu K
Singh Y
Peña Ccoa WJ
Source :
Digital discovery [Digit Discov] 2023 Jan 26; Vol. 2 (2), pp. 368-376. Date of Electronic Publication: 2023 Jan 26 (Print Publication: 2023).
Publication Year :
2023

Abstract

In this work, we investigate the question: do code-generating large language models know chemistry? Our results indicate: mostly, yes. To evaluate this, we introduce an expandable framework for assessing chemistry knowledge in these models by prompting them to solve chemistry problems posed as coding tasks. We produce a benchmark set of problems and evaluate the models on the correctness of their code, using both automated testing and expert review. We find that recent LLMs can write correct code across a variety of topics in chemistry, and that their accuracy can be increased by 30 percentage points via prompt-engineering strategies, such as putting copyright notices at the top of files. Our dataset and evaluation tools are open source; future researchers can contribute to or build upon them, and they will serve as a community resource for evaluating the performance of new models as they emerge. We also describe some good practices for employing LLMs in chemistry. The general success of these models demonstrates that their impact on chemistry teaching and research is poised to be enormous.

Competing Interests: After submission of this manuscript, A. D. W. worked as a paid consultant for OpenAI, the developers of some of the models presented in this work.

(This journal is © The Royal Society of Chemistry.)
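To make the framework concrete, the following Python sketch illustrates the kind of evaluation loop the abstract describes: pose a chemistry problem as a code-completion prompt, collect a model's completion, and grade it by automated testing. The prompt format, the molar-mass example, and the complete() stand-in are hypothetical illustrations, not the paper's actual benchmark code.

    # Hypothetical prompt: a file header (the "copyright notice" trick the
    # abstract mentions as a prompt-engineering strategy) plus a function stub
    # the model is asked to complete.
    PROMPT = '''\
    # Copyright (c) 2023. All rights reserved.

    def molar_mass_water() -> float:
        """Return the molar mass of water (H2O) in g/mol."""
    '''


    def complete(prompt: str) -> str:
        """Stand-in for a code-generating LLM; a real harness would query a model."""
        return prompt + "    return 2 * 1.008 + 15.999\n"


    def passes_tests(source: str) -> bool:
        """Automated grading: execute the completion and check a known answer."""
        namespace: dict = {}
        try:
            exec(source, namespace)
            return abs(namespace["molar_mass_water"]() - 18.015) < 0.01
        except Exception:
            return False


    if __name__ == "__main__":
        completion = complete(PROMPT)
        print("correct" if passes_tests(completion) else "incorrect")

In a real harness, complete() would call a code-generation model, and each benchmark problem would carry its own test function, so new problems and new models can be added independently, which is what makes the framework expandable.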

Details

Language :
English
ISSN :
2635-098X
Volume :
2
Issue :
2
Database :
MEDLINE
Journal :
Digital discovery
Publication Type :
Academic Journal
Accession Number :
37065678
Full Text :
https://doi.org/10.1039/d2dd00087c