
PathologyBERT -- Pre-trained Vs. A New Transformer Language Model for Pathology Domain

Authors :
Santos, Thiago
Tariq, Amara
Das, Susmita
Vayalpati, Kavyasree
Smith, Geoffrey H.
Trivedi, Hari
Banerjee, Imon
Publication Year :
2022

Abstract

Pathology text mining is a challenging task given the reporting variability and constant new findings in cancer sub-type definitions. However, successful text mining of a large pathology database can play a critical role in advancing 'big data' cancer research such as similarity-based treatment selection, case identification, prognostication, surveillance, clinical trial screening, risk stratification, and many others. While there is growing interest in developing language models for more specific clinical domains, no pathology-specific language space exists to support rapid data-mining development in the pathology space. In the literature, a few approaches fine-tuned general transformer models on specialized corpora while maintaining the original tokenizer, but in fields requiring specialized terminology these models often fail to perform adequately. We propose PathologyBERT, a pre-trained masked language model trained on 347,173 histopathology specimen reports and publicly released in the Huggingface repository. Our comprehensive experiments demonstrate that pre-training a transformer model on pathology corpora yields performance improvements on Natural Language Understanding (NLU) and breast cancer diagnosis classification when compared to non-specific language models.

Comment: submitted to "American Medical Informatics Association (AMIA)" 2022 Annual Symposium
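Since the abstract states the model is a masked language model released on the Huggingface Hub, a minimal sketch of how such a model could be loaded and queried with the transformers library is shown below. The repository id "tsantos/PathologyBERT" and the example sentence are assumptions, not confirmed by this record; check the Hub for the authors' exact model name.

```python
# Minimal sketch: loading a pathology masked language model from the Huggingface Hub
# and running fill-mask inference. The model id below is an assumption.
from transformers import AutoTokenizer, AutoModelForMaskedLM, pipeline

model_id = "tsantos/PathologyBERT"  # assumed Hub id; verify against the authors' release

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForMaskedLM.from_pretrained(model_id)

# Predict the masked token in a synthetic pathology-style sentence.
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
for prediction in fill_mask("invasive ductal [MASK] of the breast"):
    print(prediction["token_str"], round(prediction["score"], 3))
```

A domain-specific tokenizer trained on pathology reports would segment specialized terms (e.g., histopathology entities) into fewer sub-word pieces than a general BERT vocabulary, which is the motivation the abstract gives for pre-training rather than only fine-tuning.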

Details

Database :
arXiv
Publication Type :
Report
Accession number :
edsarx.2205.06885
Document Type :
Working Paper