
Anchoring Concepts Influence Essay Conceptual Structure and Test Performance

Authors:
Clariana, Roy B.
Solnosky, Ryan
Source:
Proceedings of the IADIS International Conference on Cognition & Exploratory Learning in Digital Age; 2023, p241-248, 8p
Publication Year:
2023

Abstract

This quasi-experimental study seeks to improve the conceptual quality of summary essays by comparing two conditions: essay prompts with or without a list of 13 broad concepts, selected across a continuum of the 100 most frequent words in the lesson materials. It is anticipated that only the most central concepts will be used as "anchors" when writing. Participants (n = 90) in an undergraduate Architectural Engineering course read the assigned lesson textbook chapter and attended lectures and labs; then, in a final lab session, they were asked to write a 300-word summary of the lesson content. The data consist of the essays converted to networks and the end-of-unit multiple-choice test. Compared to the expert network benchmark, the essay networks of those receiving the broad concepts in the writing prompt were not significantly different from the networks of those who did not receive the concepts. However, those receiving the broad concepts were significantly more like peer essay networks (mental model convergence) and like the networks of the two PowerPoint lectures, though neither group's networks resembled the textbook chapter's. Further, those receiving the broad concepts performed significantly better on the end-of-unit test than those not receiving the concepts. Term-frequency analysis of the essays indicates, as expected, that the most network-central concepts occurred more frequently in essays; the frequencies of the other terms were remarkably similar across the terms and no-terms groups, suggesting a similar underlying conceptual mental model of the lesson content. To further explore the influence of anchoring concepts in summary writing prompts, essays were generated with the same two summary writing prompts using OpenAI (ChatGPT) and Google Bard, plus a new prompt that used the 13 most central concepts from the expert's network. The quality of the essay networks from both AI systems was equivalent to that of the students' essay networks for both the broad-concepts and no-concepts treatments.
However, the AI essays derived with the 13 most central concepts were significantly better (more like the expert network) than the student and AI essays derived with broad concepts or with no concepts. In addition, Bard and OpenAI used several of the same concepts at a higher frequency than the students did, suggesting that the two AI systems have more similar knowledge graphs of this content. In sum, adding 13 broad conceptual terms to a summary writing prompt improved both structural and declarative knowledge outcomes, but adding the 13 most central concepts may be even better. More research is needed to understand how including concepts and other terms in a writing prompt influences students' essay conceptual structure and subsequent test performance. [ABSTRACT FROM AUTHOR]
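The abstract does not specify how essays were converted to networks or how network similarity was scored. One common approach in this literature, sketched below as a hypothetical illustration (the function names, concept list, and sample sentences are invented for the example, and the window size and Jaccard measure are assumptions, not the authors' method), is to link concept terms that co-occur within a sliding window of words and then compare the resulting edge sets:

```python
# Hypothetical sketch: builds a simple concept co-occurrence network from
# text and compares two networks by edge-set overlap. The paper's actual
# network-construction and comparison methods are not given in the abstract.

from itertools import combinations

def text_to_network(text, concepts, window=5):
    """Return the set of concept pairs that co-occur within `window` words."""
    words = [w.strip(".,;:!?\"'()").lower() for w in text.split()]
    targets = {c.lower() for c in concepts}
    hits = [(i, w) for i, w in enumerate(words) if w in targets]
    edges = set()
    for (i, a), (j, b) in combinations(hits, 2):
        if a != b and abs(i - j) <= window:
            edges.add(frozenset((a, b)))
    return edges

def similarity(net_a, net_b):
    """Jaccard similarity of two edge sets (0 = disjoint, 1 = identical)."""
    if not net_a and not net_b:
        return 1.0
    return len(net_a & net_b) / len(net_a | net_b)

# Invented example concepts and sentences, for illustration only.
concepts = ["load", "beam", "column", "truss"]
essay = "The beam transfers the load to each column, and the truss spans the opening."
expert = "A beam carries load into a column; a truss distributes load across members."
net_essay = text_to_network(essay, concepts)
net_expert = text_to_network(expert, concepts)
print(similarity(net_essay, net_expert))  # prints 0.6
```

On this view, "mental model convergence" corresponds to high pairwise edge-set similarity among student essay networks, while benchmark comparison is similarity to the expert's network.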

Details

Language:
English
Database:
Supplemental Index
Journal:
Proceedings of the IADIS International Conference on Cognition & Exploratory Learning in Digital Age
Publication Type:
Conference
Accession number:
174156451