
Label informed hierarchical transformers for sequential sentence classification in scientific abstracts.

Authors :
Tokala, Yaswanth Sri Sai Santosh
Aluru, Sai Saketh
Vallabhajosyula, Anoop
Sanyal, Debarshi Kumar
Das, Partha Pratim
Source :
Expert Systems; Jul 2023, Vol. 40, Issue 6, p1-13, 13p
Publication Year :
2023

Abstract

Segmenting scientific abstracts into discourse categories like background, objective, method, result, and conclusion is useful in many downstream tasks like search, recommendation, and summarization. This task of classifying each sentence in the abstract into one of a given set of discourse categories is called sequential sentence classification. Existing machine learning‐based approaches to this problem consider the content of only the abstract to obtain the neural representation of each sentence, which is then labelled with a discourse category. But this ignores the semantic information offered by the discourse labels themselves. In this paper, we propose LIHT, Label Informed Hierarchical Transformers – a method for sequential sentence classification that explicitly and hierarchically exploits the semantic information in the labels to learn label‐aware neural sentence representations. The hierarchical model helps to capture not only the fine‐grained interactions between the discourse labels and the words in the abstract at the sentence level but also the potential dependencies that may exist in the label sequence. Thus, LIHT generates label‐aware contextual sentence representations that are then labelled with a conditional random field. We evaluate LIHT on three publicly available datasets, namely, PUBMED‐RCT, NICTA‐PIBOSO and CSAbstract. The incremental gain in F1‐score in all three cases over the respective state‐of‐the‐art approaches is around 1%. Though the gains are modest, LIHT establishes a new performance benchmark for this task and is a novel technique of independent interest. We also perform an ablation study to identify the contribution of each component of LIHT to the observed performance, and a case study to visualize the roles of the different components of our model. [ABSTRACT FROM AUTHOR]
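The record does not include code, but the final step the abstract describes (labelling the label-aware sentence representations with a conditional random field) can be sketched generically. The snippet below is a minimal Viterbi decoder for a linear-chain CRF over per-sentence label scores; all names and the toy numbers are hypothetical, and in LIHT the emission scores would come from the hierarchical transformer rather than being hand-supplied.

```python
import numpy as np

# The five discourse categories named in the abstract.
DISCOURSE_LABELS = ["background", "objective", "method", "result", "conclusion"]

def viterbi_decode(emissions, transitions):
    """Return the most likely label sequence under a linear-chain CRF.

    emissions:   (num_sentences, num_labels) per-sentence label scores
    transitions: (num_labels, num_labels) score of moving from label i to j
    """
    n, k = emissions.shape
    score = emissions[0].copy()            # best path score ending in each label
    backptr = np.zeros((n, k), dtype=int)  # best previous label at each step
    for t in range(1, n):
        # Score every (previous -> current) label pair, keep the best previous
        # label for each current label.
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    # Trace the best path backwards from the highest-scoring final label.
    path = [int(score.argmax())]
    for t in range(n - 1, 0, -1):
        path.append(int(backptr[t, path[-1]]))
    path.reverse()
    return [DISCOURSE_LABELS[i] for i in path]
```

With zero transition scores the decoder reduces to a per-sentence argmax; a learned transition matrix is what lets the CRF capture the label-sequence dependencies the abstract refers to (e.g. that a result sentence rarely precedes an objective sentence).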

Details

Language :
English
ISSN :
02664720
Volume :
40
Issue :
6
Database :
Complementary Index
Journal :
Expert Systems
Publication Type :
Academic Journal
Accession number :
164116255
Full Text :
https://doi.org/10.1111/exsy.13238