
Automated Summary Scoring with ReaderBench

Authors :
Botarleanu, Robert-Mihai
Dascalu, Mihai
Allen, Laura K.
Crossley, Scott Andrew
McNamara, Danielle S.
Source :
Grantee Submission. 2021. Paper presented at ITS 2021.
Publication Year :
2021

Abstract

Text summarization is an effective reading comprehension strategy. However, summary evaluation is complex and must account for multiple factors, including both the summary and the reference text. This study examines a corpus of approximately 3,000 summaries based on 87 reference texts, with each summary manually scored on a 4-point Likert scale. Machine learning models leveraging Natural Language Processing (NLP) techniques were trained to predict the extent to which summaries capture the main idea of the target text. The NLP models combined domain- and language-independent textual complexity indices from the ReaderBench framework with state-of-the-art language models and deep learning architectures that provide semantic contextualization. The models achieve low errors, with normalized MAE ranging from 0.13 to 0.17 and corresponding R² values of up to 0.46. Our approach consistently outperforms baselines that use TF-IDF vectors and linear models, as well as Transformer-based regression using BERT. These results indicate that NLP algorithms combining linguistic and semantic indices are accurate and robust, while generalizing to a wide array of topics. [This paper was published in: A. I. Cristea and C. Troussas (Eds.), "ITS 2021: Intelligent Tutoring Systems Proceedings," pp. 321-332, 2021. Springer, Cham, Switzerland.]
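Two pieces of the abstract can be sketched concretely: the TF-IDF-plus-linear-model baseline and the reported metrics (normalized MAE and R²). The following is a minimal, hypothetical illustration using scikit-learn with made-up data; the library choice, variable names, and example corpus are assumptions for illustration only, not details taken from the paper.

import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score
from sklearn.model_selection import train_test_split

# Made-up summaries paired with 1-4 Likert scores (illustrative only).
summaries = [
    "The text argues that summarization improves reading comprehension.",
    "It is about reading.",
    "Summarizing helps readers identify and retain the main ideas.",
    "The passage mentions several unrelated topics.",
    "Readers who summarize recall the central argument more accurately.",
    "Something about texts.",
]
scores = np.array([4.0, 1.0, 3.0, 2.0, 4.0, 1.0])

X_train, X_test, y_train, y_test = train_test_split(
    summaries, scores, test_size=0.5, random_state=0)

# Baseline: TF-IDF features fed to an ordinary least-squares regressor.
vectorizer = TfidfVectorizer()
model = LinearRegression()
model.fit(vectorizer.fit_transform(X_train), y_train)
pred = model.predict(vectorizer.transform(X_test))

# Normalized MAE divides the MAE by the score range (4 - 1 = 3), so a
# normalized MAE of 0.13-0.17 is roughly 0.4-0.5 points on the scale.
nmae = mean_absolute_error(y_test, pred) / (scores.max() - scores.min())
print(f"normalized MAE = {nmae:.2f}, R^2 = {r2_score(y_test, pred):.2f}")

On the real corpus, the paper's models would replace the toy regressor above with ReaderBench complexity indices combined with contextual embeddings; this sketch only shows how the named baseline and metrics fit together.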

Details

Language :
English
Database :
ERIC
Journal :
Grantee Submission
Publication Type :
Conference
Accession Number :
ED630662
Document Type :
Speeches/Meeting Papers; Reports - Research
Full Text :
https://doi.org/10.1007/978-3-030-80421-3_35