
Large language models, physics-based modeling, experimental measurements: the trinity of data-scarce learning of polymer properties

Authors:
Liu, Ning
Jafarzadeh, Siavash
Lattimer, Brian Y.
Ni, Shuna
Lua, Jim
Yu, Yue
Publication Year: 2024

Abstract

Large language models (LLMs) hold promise as a fast and accurate material modeling paradigm for evaluation, analysis, and design. Their vast number of trainable parameters necessitates a wealth of data to achieve accuracy and mitigate overfitting. However, experimental measurements are often limited and costly to obtain in sufficient quantities for finetuning. To this end, we present a physics-based training pipeline that tackles the pathology of data scarcity. The core enabler is a physics-based modeling framework that generates a multitude of synthetic data to align the LLM to a physically consistent initial state before finetuning. Our framework features a two-phase training strategy: (1) supervised pretraining on the abundant but less accurate synthetic data, and (2) finetuning the phase-1 model with the limited experimental data. We empirically demonstrate that supervised pretraining is vital to obtaining accurate finetuned LLMs, through the lens of learning polymer flammability metrics, where cone calorimeter data are sparse.
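The two-phase strategy described above can be illustrated with a minimal sketch: phase 1 pretrains on abundant physics-generated synthetic data, and phase 2 finetunes on a small experimental set. The sketch below is only an assumption-laden illustration of that idea, not the paper's implementation; a small PyTorch MLP stands in for the LLM, and the dataset names, sizes, and hyperparameters are hypothetical.

```python
# Hypothetical sketch of the two-phase training strategy from the abstract:
# phase 1 = supervised pretraining on synthetic (physics-based) data,
# phase 2 = finetuning on scarce experimental measurements.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)

# Stand-in "physics-based" synthetic data: many samples, moderate noise.
x_syn = torch.rand(5000, 4)
y_syn = x_syn.sum(dim=1, keepdim=True) + 0.1 * torch.randn(5000, 1)

# Stand-in "experimental" data: few samples, assumed more accurate.
x_exp = torch.rand(50, 4)
y_exp = x_exp.sum(dim=1, keepdim=True) + 0.01 * torch.randn(50, 1)

# A small MLP stands in for the LLM used in the paper.
model = nn.Sequential(nn.Linear(4, 64), nn.ReLU(), nn.Linear(64, 1))
loss_fn = nn.MSELoss()

def train(loader, epochs, lr):
    """Simple supervised training loop shared by both phases."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb), yb)
            loss.backward()
            opt.step()

# Phase 1: supervised pretraining on the abundant synthetic corpus.
train(DataLoader(TensorDataset(x_syn, y_syn), batch_size=256, shuffle=True),
      epochs=20, lr=1e-3)

# Phase 2: finetuning on the limited experimental set, with a smaller
# learning rate so the physically consistent initialization is preserved.
train(DataLoader(TensorDataset(x_exp, y_exp), batch_size=16, shuffle=True),
      epochs=50, lr=1e-4)
```

The key design point is that phase 2 starts from the phase-1 weights rather than from scratch, so the scarce experimental data only needs to correct the synthetic-data model rather than learn the mapping outright.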

Details

Database: arXiv
Publication Type: Report
Accession number: edsarx.2407.02770
Document Type: Working Paper