
Developing Pretrained Language Models for Turkish Biomedical Domain

Authors :
Hazal Turkmen
Oguz Dikenelli
Cenk Eraslan
Mehmet Cem Calli
Suha Sureyya Ozbek
Publication Year :
2022
Publisher :
IEEE, 2022.

Abstract

10th IEEE International Conference on Healthcare Informatics (IEEE ICHI), June 11-14, 2022, Rochester, MN

Pretrained language models enhanced with in-domain corpora show impressive results in biomedical and clinical NLP tasks in English; however, there is minimal work in low-resource languages. This work introduces the BioBERTurk family, three pretrained Turkish models for the biomedical domain. To evaluate the models, we also introduce a labeled dataset for classifying radiology reports of CT exams. Our first model was initialized from BERTurk and continued pretraining on a biomedical corpus. The second model likewise continues pretraining the general BERTurk model, but on a corpus of radiology Ph.D. theses, to test the effect of task-related text. The final model combines the radiology and biomedical corpora with the BERTurk corpus and pretrains a BERT model from scratch. The F-scores of our models on radiology report classification are 92.99, 92.75, and 89.49, respectively. To the best of our knowledge, this is the first work to evaluate the effect of a small in-domain corpus in pretraining from scratch.

We would like to acknowledge the support we received from the Tensorflow Research Cloud (TRC) team in providing access to TPUv3 units.
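The continued-pretraining recipe the abstract describes for the first model (initialize from BERTurk, then keep training with the masked-language-modeling objective on an in-domain corpus) can be sketched with the Hugging Face Transformers library. This is a minimal illustration under stated assumptions, not the authors' training code: dbmdz/bert-base-turkish-cased is the public BERTurk checkpoint, while biomedical_corpus.txt and every hyperparameter below are placeholders.

# Minimal sketch of continued pretraining from BERTurk on an in-domain corpus.
# The checkpoint name is the public BERTurk release; the corpus file and all
# hyperparameters are illustrative placeholders, not the paper's settings.
from datasets import load_dataset
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

checkpoint = "dbmdz/bert-base-turkish-cased"  # public BERTurk model
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint)

# Plain-text corpus, one passage per line (placeholder file name).
corpus = load_dataset("text", data_files={"train": "biomedical_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_set = corpus["train"].map(tokenize, batched=True, remove_columns=["text"])

# Standard BERT masked-language-modeling objective (15% of tokens masked).
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="bioberturk-continued",
        per_device_train_batch_size=32,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=train_set,
    data_collator=collator,
)
trainer.train()

Evaluating such a checkpoint on the labeled CT-report dataset would then follow the usual pattern of loading it with AutoModelForSequenceClassification and fine-tuning on the classification labels.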

Details

Language :
English
Database :
OpenAIRE
Accession number :
edsair.doi.dedup.....7e6e4d5ca07557cce60bc853047e3f04