
The performance of the LSTM-based code generated by Large Language Models (LLMs) in forecasting time series data

Authors :
Saroj Gopali
Sima Siami-Namini
Faranak Abri
Akbar Siami Namin
Source :
Natural Language Processing Journal, Vol. 9, Art. 100120 (2024)
Publication Year :
2024
Publisher :
Elsevier, 2024.

Abstract

Generative AI, and in particular Large Language Models (LLMs), have gained substantial momentum due to their wide applications across disciplines. While the ability of these game-changing technologies to generate textual information has already been demonstrated in several application domains, their ability to generate complex models and executable code still needs to be explored. An intriguing case is the goodness of the machine and deep learning models generated by these LLMs for automated scientific data analysis, where a data analyst may lack the expertise to manually code and optimize complex deep learning models and may therefore opt to leverage LLMs to generate the required models. This paper investigates and compares the performance of mainstream LLMs, namely ChatGPT, PaLM, LLama, and Falcon, in generating deep learning models for analyzing time series data, an important and popular data type prevalent in many application domains, including finance and the stock market. This research conducts a set of controlled experiments in which the prompts for generating deep learning-based models are controlled with respect to sensitivity levels of four criteria: (1) Clarity and Specificity, (2) Objective and Intent, (3) Contextual Information, and (4) Format and Style. While the results are relatively mixed, we observe some distinct patterns. We find that, using LLMs, we are able to generate deep learning-based models with executable code for each dataset separately, whose performance is comparable to that of manually crafted and optimized LSTM models for predicting the whole time series dataset. We also find that ChatGPT outperforms the other LLMs in generating more accurate models. Furthermore, we observe that the goodness of the generated models varies with respect to the "temperature" parameter used in configuring the LLMs. The results can benefit data analysts and practitioners who would like to leverage generative AI to produce prediction models with acceptable goodness.
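For illustration, below is a minimal sketch, in Python with Keras, of the kind of one-layer LSTM forecaster the study's prompts ask the LLMs to generate. The window length, layer width, and synthetic series here are assumptions made for the sake of a self-contained runnable example, not the authors' actual configuration or data.

```python
# A minimal sketch of an LSTM one-step-ahead forecaster of the kind the paper
# asks LLMs to generate; layer sizes, window length, and the synthetic data
# below are assumptions, not the authors' actual setup.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

def make_windows(series, window=10):
    """Slice a 1-D series into (window -> next value) supervised pairs."""
    X, y = [], []
    for i in range(len(series) - window):
        X.append(series[i:i + window])
        y.append(series[i + window])
    X = np.asarray(X, dtype="float32")[..., np.newaxis]  # (samples, window, 1)
    return X, np.asarray(y, dtype="float32")

# Synthetic stand-in for a financial time series.
series = np.sin(np.linspace(0, 40, 1000)) + 0.1 * np.random.randn(1000)
X, y = make_windows(series)
split = int(0.8 * len(X))  # 80/20 train/test split on the time axis

model = keras.Sequential([
    layers.Input(shape=(X.shape[1], 1)),
    layers.LSTM(50),   # a single LSTM layer, typical of a baseline model
    layers.Dense(1),   # one-step-ahead forecast
])
model.compile(optimizer="adam", loss="mse")
model.fit(X[:split], y[:split], epochs=10, batch_size=32, verbose=0)

# Evaluate on the held-out tail with RMSE, a common goodness metric.
preds = model.predict(X[split:], verbose=0).ravel()
rmse = float(np.sqrt(np.mean((preds - y[split:]) ** 2)))
print(f"test RMSE: {rmse:.4f}")
```

In a workflow like the one the abstract describes, such a model would be generated and trained per dataset, with its error compared against a hand-tuned LSTM baseline; the prompt wording and the temperature setting of the LLM determine which variant of this code is produced.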

Details

Language :
English
ISSN :
2949-7191
Volume :
9
Article Number :
100120
Database :
Directory of Open Access Journals
Journal :
Natural Language Processing Journal
Publication Type :
Academic Journal
Accession number :
edsdoj.ba3c1c25b6e046f184d662dbacd2799e
Document Type :
Article
Full Text :
https://doi.org/10.1016/j.nlp.2024.100120