
Harnessing large language models' zero-shot and few-shot learning capabilities for regulatory research.

Authors :
Meshkin, Hamed
Zirkle, Joel
Arabidarrehdor, Ghazal
Chaturbedi, Anik
Chakravartula, Shilpa
Mann, John
Thrasher, Bradlee
Li, Zhihua
Source :
Briefings in Bioinformatics; Sep 2024, Vol. 25, Issue 5, p1-11, 11p
Publication Year :
2024

Abstract

Large language models (LLMs) are sophisticated AI-driven models trained on vast sources of natural language data. They are adept at generating responses that closely mimic human conversational patterns. One of the most notable examples is OpenAI's ChatGPT, which has been extensively used across diverse sectors. Despite their flexibility, a significant challenge arises because most users must transmit their data to the servers of the companies operating these models. Using ChatGPT or similar models online may inadvertently expose sensitive information to the risk of data breaches. Therefore, implementing open-source, smaller-scale LLMs within a secure local network becomes a crucial step for organizations where data privacy and protection have the highest priority, such as regulatory agencies. As a feasibility evaluation, we implemented a series of open-source LLMs within a regulatory agency's local network and assessed their performance on specific tasks involving extracting relevant clinical pharmacology information from regulatory drug labels. Our research shows that some models work well in the context of few- or zero-shot learning, achieving performance comparable to, or even better than, that of neural network models that required thousands of training samples. One model was then selected, without any training or fine-tuning, to address a real-world problem: identifying intrinsic factors that affect drugs' clinical exposure. On a dataset of over 700 000 sentences, the model achieved 78.5% accuracy. Our work points to the possibility of implementing open-source LLMs within a secure local network and using these models to perform various natural language processing tasks when large numbers of training examples are unavailable. [ABSTRACT FROM AUTHOR]
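
To make the zero-shot approach described in the abstract concrete, the following is a minimal sketch of how a locally hosted open-source LLM could be prompted to flag drug-label sentences that mention intrinsic factors affecting clinical exposure. This is an illustration, not the authors' implementation: the model path, prompt wording, and yes/no labeling scheme are assumptions, and the example uses the Hugging Face Transformers text-generation pipeline running entirely on local hardware.

# Sketch only (not the authors' code): zero-shot labeling of drug-label sentences
# with an open-source LLM loaded from a local directory, so no text leaves the
# secure network. Model path, prompt, and labels are illustrative assumptions.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="/models/open-llm-7b-instruct",  # hypothetical local model directory
    device_map="auto",
)

PROMPT = (
    "You are reviewing sentences from regulatory drug labels.\n"
    "Answer with 'yes' or 'no' only.\n"
    "Does the following sentence describe an intrinsic factor (e.g. age, sex, "
    "renal or hepatic impairment, genetics) that affects the drug's clinical exposure?\n"
    "Sentence: {sentence}\n"
    "Answer:"
)

def classify(sentence: str) -> str:
    # Zero-shot call; a few-shot variant would simply prepend worked examples
    # to the prompt before the target sentence.
    out = generator(
        PROMPT.format(sentence=sentence),
        max_new_tokens=3,
        do_sample=False,          # deterministic decoding for reproducible labels
        return_full_text=False,
    )
    return out[0]["generated_text"].strip().lower()

if __name__ == "__main__":
    print(classify("Systemic exposure increased 2-fold in patients with severe renal impairment."))

In this sketch, each sentence from the label corpus is sent to the locally hosted model with the same instruction, and the one-word answer is taken as the predicted label; no parameters are updated, which is what makes the approach usable when only a handful of labeled examples (or none) are available.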

Details

Language :
English
ISSN :
1467-5463
Volume :
25
Issue :
5
Database :
Complementary Index
Journal :
Briefings in Bioinformatics
Publication Type :
Academic Journal
Accession number :
179874060
Full Text :
https://doi.org/10.1093/bib/bbae354