Abstract
Formative feedback, aimed at helping students improve their work, is an important factor in learning. Many tools that offer programming exercises provide automated feedback on student solutions. We have performed a systematic literature review to find out what kind of feedback is provided, which techniques are used to generate the feedback, how adaptable the feedback is, and how these tools are evaluated. We have designed a labelling to classify the tools, and we use Narciss' feedback content categories to classify feedback messages. We report on the results of coding a total of 101 tools. We have found that feedback mostly focuses on identifying mistakes and less on fixing problems and taking a next step. Furthermore, teachers cannot easily adapt tools to their own needs. However, the diversity of feedback types has increased over the past decades, and new techniques are being applied to generate feedback that is increasingly helpful for students.
| Original language | English |
|---|---|
| Article number | 3 |
| Number of pages | 43 |
| Journal | ACM Transactions on Computing Education |
| Volume | 19 |
| Issue number | 1 |
| DOIs | |
| Publication status | Published - Jan 2019 |
Keywords
- ASSIGNMENTS
- COMPLETION
- COMPUTER
- DIAGNOSIS
- ENVIRONMENT
- INTELLIGENT TUTORING SYSTEMS
- KNOWLEDGE
- SOFTWARE
- STUDENT PROGRAMS
- Systematic literature review
- automated feedback
- learning programming
- programming tools