Limitations of Transformers on Clinical Text Classification
- Author
- M. Todd Young, Jennifer A. Doherty, Linda Coyle, Xiao-Cheng Wu, Noah Schaefferkoetter, Antoinette M. Stroup, Hong-Jun Yoon, John Gounley, Eric B. Durbin, Georgia D. Tourassi, Shang Gao, and Mohammed Alawad
- Subjects
- Computer science, Convolutional neural network, Data modeling, Health Information Management, Humans, Electrical and Electronic Engineering, Natural Language Processing, Artificial neural network, Document classification, Deep learning, Lexical analysis, Computer Science Applications, Task analysis, Neural Networks (Computer), Artificial intelligence, Encoder, Biotechnology
- Abstract
Bidirectional Encoder Representations from Transformers (BERT) and BERT-based approaches are the current state-of-the-art in many natural language processing (NLP) tasks; however, their application to document classification on long clinical texts is limited. In this work, we introduce four methods to scale BERT, which by default can only handle input sequences up to approximately 400 words long, to perform document classification on clinical texts several thousand words long. We compare these methods against two much simpler architectures - a word-level convolutional neural network and a hierarchical self-attention network - and show that BERT often cannot beat these simpler baselines when classifying MIMIC-III discharge summaries and SEER cancer pathology reports. In our analysis, we show that two key components of BERT - pretraining and WordPiece tokenization - may actually be inhibiting BERT's performance on clinical text classification tasks where the input document is several thousand words long and where correctly identifying labels may depend more on identifying a few key words or phrases rather than understanding the contextual meaning of sequences of text.
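The paper's four scaling methods are not reproduced here, but a minimal sketch of one common workaround illustrates the input-length constraint the abstract describes: split the WordPiece token sequence of a long clinical note into overlapping chunks that fit BERT's 512-token limit, classify each chunk, and pool the chunk-level logits. The model name, chunk size, stride, and mean-pooling step below are illustrative assumptions, not details taken from the paper.

```python
# Sketch: chunked classification of a document longer than BERT's 512-token limit.
# Assumptions: Hugging Face transformers + PyTorch; "bert-base-uncased" stands in
# for whatever clinical BERT variant one might actually use.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "bert-base-uncased"  # placeholder model
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)
model.eval()

def classify_long_document(text: str, chunk_size: int = 512, stride: int = 256) -> int:
    # Tokenize the whole note without truncation, then window over the token ids.
    input_ids = tokenizer(text, add_special_tokens=False, return_tensors="pt")["input_ids"][0]
    window_len = chunk_size - 2  # leave room for [CLS] and [SEP]
    chunk_logits = []
    for start in range(0, len(input_ids), stride):
        window = input_ids[start:start + window_len]
        ids = torch.cat([
            torch.tensor([tokenizer.cls_token_id]),
            window,
            torch.tensor([tokenizer.sep_token_id]),
        ]).unsqueeze(0)
        with torch.no_grad():
            chunk_logits.append(model(input_ids=ids).logits)
        if start + window_len >= len(input_ids):
            break
    # Mean-pool chunk-level logits into a single document-level prediction.
    return torch.cat(chunk_logits, dim=0).mean(dim=0).argmax().item()
```

This kind of chunk-and-pool scheme keeps every token visible to the model, but it is exactly the setting where, per the abstract, a simpler word-level CNN or hierarchical self-attention network may do as well or better when the label hinges on a few key words rather than long-range context.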
- Published
- 2021