From the Automated Assessment of Student Essay Content to Highly Informative Feedback: a Case Study

Sebastian Gombert*, Aron Fink, Tornike Giorgashvili, Ioana Jivet, Daniele Di Mitri, Jane Yau, Andreas Frey, Hendrik Drachsler

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › Peer-reviewed

Abstract

Various studies have empirically demonstrated the value of highly informative feedback for enhancing learner success. However, digital educational technology has yet to catch up, as automated feedback often remains shallow. This paper presents a case study on implementing a pipeline that provides German-speaking university students enrolled in an introductory-level educational psychology lecture with content-specific feedback for a lecture assignment. In the assignment, students have to discuss, in essays, the usefulness and educational grounding (i.e., connection to working memory, metacognition or motivation) of ten learning tips presented in a video. Through our system, students received feedback on the correctness of their solutions and on the content areas they needed to improve. For this purpose, we implemented a natural language processing pipeline with two steps: (1) segmenting the essays and (2) predicting codes from the resulting segments, which are then used to generate feedback texts. As training data for the model in each processing step, we used 689 manually labelled essays submitted by the previous student cohort. We then evaluated approaches based on GBERT and T5 against bag-of-words baselines for scoring them. The models for both pipeline steps, especially the transformer-based ones, demonstrated high performance. In the final step, we evaluated the feedback using a randomised controlled trial. The control group received feedback as usual (essential feedback), while the treatment group received highly informative feedback based on the natural language processing pipeline. We then used a six-item survey to measure the perception of the feedback. We conducted an ordinary least squares analysis to model these items as dependent variables, which showed that highly informative feedback had positive effects on perceived helpfulness and reflection.

Original language: English
Pages (from-to): 1378-1416
Number of pages: 39
Journal: International Journal of Artificial Intelligence in Education
Volume: 34
Issue number: 4
Early online date: 25 Jan 2024
Publication status: Published - Dec 2024

Keywords

  • Analytic scoring
  • Automated essay scoring (AES)
  • Automated feedback
  • Content scoring
  • Writing assessment
