Scoring Summaries Using Recurrent Neural Networks

Stefan Ruseti, Mihai Dascalu, Amy M. Johnson, Danielle S. McNamara, Renu Balyan, Kathryn S. McCarthy, Stefan Trausan-Matu

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


Summarization enhances comprehension and is considered an effective strategy to promote deep understanding of texts. However, summarization is seldom implemented by teachers in classrooms because manual evaluation of summaries requires considerable time and effort. Although the need for automated support is pressing, only a few shallow systems are available, most of which rely on basic word/n-gram overlaps. In this paper, we introduce a hybrid model that combines state-of-the-art recurrent neural networks with textual complexity indices to score summaries. Our best model achieves over 55% accuracy on a 3-way classification task that measures the degree to which the main ideas of the original text are covered by the summary. Our experiments show that writing style, captured by the textual complexity indices, combined with the semantic content of the summary, yields the best predictions. To the best of our knowledge, this is the first work to use RNNs for scoring and evaluating summaries.
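The hybrid approach described in the abstract can be illustrated with a minimal sketch: an RNN encodes the summary into a semantic vector, that vector is concatenated with handcrafted textual complexity indices, and a 3-way softmax predicts the coverage class. All dimensions, parameter names, and the plain-tanh recurrence below are illustrative assumptions, not the authors' actual architecture.

```python
import numpy as np

def rnn_encode(token_vectors, W_h, W_x, b):
    """Illustrative vanilla RNN encoder: returns the final hidden state.
    (The paper uses state-of-the-art RNNs; this tanh cell is a stand-in.)"""
    h = np.zeros(W_h.shape[0])
    for x in token_vectors:
        h = np.tanh(W_h @ h + W_x @ x + b)
    return h

def score_summary(token_vectors, complexity_indices, params):
    """Hybrid scoring sketch: concatenate the RNN's semantic representation
    with textual complexity indices, then apply a 3-way softmax over the
    assumed coverage classes {low, medium, high}."""
    h = rnn_encode(token_vectors, params["W_h"], params["W_x"], params["b"])
    features = np.concatenate([h, complexity_indices])
    logits = params["W_out"] @ features + params["b_out"]
    exp = np.exp(logits - logits.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical sizes: hidden=8, word embedding=5, complexity indices=4.
rng = np.random.default_rng(0)
params = {
    "W_h": rng.standard_normal((8, 8)) * 0.1,
    "W_x": rng.standard_normal((8, 5)) * 0.1,
    "b": np.zeros(8),
    "W_out": rng.standard_normal((3, 12)) * 0.1,
    "b_out": np.zeros(3),
}
summary_tokens = rng.standard_normal((6, 5))      # 6 tokens, toy embeddings
indices = rng.standard_normal(4)                  # toy complexity indices
probs = score_summary(summary_tokens, indices, params)
```

In practice the output layer would be trained jointly with the encoder on labeled summaries; the untrained random weights here only demonstrate the data flow.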
Original language: English
Title of host publication: Intelligent Tutoring Systems
Subtitle of host publication: ITS 2018
Editors: R. Nkambou, R. Azevedo, J. Vassileva
Number of pages: 11
Publication status: Published - 17 May 2018
Externally published: Yes
Event: Intelligent Tutoring Systems (ITS 2018): Lecture Notes in Computer Science - Montreal, Canada
Duration: 11 Jun 2018 - 15 Jun 2018
Conference number: vol 10858

