Pre-training phenotyping classifiers.
- Source :
- Journal of biomedical informatics [J Biomed Inform] 2021 Jan; Vol. 113, pp. 103626. Date of Electronic Publication: 2020 Nov 28.
- Publication Year :
- 2021
Abstract
- Recent transformer-based pre-trained language models have become a de facto standard for many text classification tasks. Nevertheless, their utility in the clinical domain, where classification is often performed at encounter or patient level, is still uncertain due to the limitation on the maximum length of input. In this work, we introduce a self-supervised method for pre-training that relies on a masked token objective and is free from the limitation on the maximum input length. We compare the proposed method with supervised pre-training that uses billing codes as a source of supervision. We evaluate the proposed method on one publicly-available and three in-house datasets using standard evaluation metrics such as the area under the ROC curve and F1 score. We find that, surprisingly, even though self-supervised pre-training performs slightly worse than supervised, it still preserves most of the gains from pre-training.
- (Copyright © 2020 Elsevier Inc. All rights reserved.)
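- The abstract names the area under the ROC curve and the F1 score as its evaluation metrics. The snippet below is a minimal, illustrative sketch of computing both with scikit-learn; it is not the authors' evaluation code, and the labels and predicted scores are placeholder data.

```python
# Illustrative sketch: the two metrics cited in the abstract, computed with
# scikit-learn on placeholder data (not results from the paper).
from sklearn.metrics import roc_auc_score, f1_score

y_true = [0, 1, 1, 0, 1, 0, 1, 1]                     # hypothetical gold phenotype labels
y_score = [0.2, 0.8, 0.6, 0.3, 0.9, 0.4, 0.7, 0.55]   # hypothetical classifier probabilities

auroc = roc_auc_score(y_true, y_score)                 # threshold-free ranking metric
y_pred = [1 if s >= 0.5 else 0 for s in y_score]       # binarize at an assumed 0.5 threshold
f1 = f1_score(y_true, y_pred)                          # harmonic mean of precision and recall

print(f"AUROC: {auroc:.3f}, F1: {f1:.3f}")
```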
- Subjects :
- Humans
- ROC Curve
- Language
- Natural Language Processing
Details
- Language :
- English
- ISSN :
- 1532-0480
- Volume :
- 113
- Database :
- MEDLINE
- Journal :
- Journal of biomedical informatics
- Publication Type :
- Academic Journal
- Accession number :
- 33259943
- Full Text :
- https://doi.org/10.1016/j.jbi.2020.103626