AUTOMATIC FEEDBACK GENERATION USING A PCK-INFORMED LARGE LANGUAGE MODEL: A CASE STUDY OF FEEDBACK ON VARIABLES IN STUDENT PROGRAMS

  • Jaap Geurts

Student thesis: Master's Thesis

Abstract

Feedback is an important component of the learning process, and good feedback is most effective when it attempts to close the gap between the learners’ current understanding and their goals [Hattie and Timperley, 2007]. Recognizing this, ICT universities of applied sciences in the Netherlands are shifting towards a differentiated and personal learning model in which feedback is given a prominent role and student-teacher interaction is placed at the centre of their education. However, demand for teacher time may increase, as providing personalized feedback is time-consuming and not easily scalable. Given recent advances in generative AI, some of this feedback could potentially be automated and provided by Large Language Models (LLMs).
In this study, we explored how automated feedback on variable usage in student programs can be generated using an LLM informed by Pedagogical Content Knowledge (PCK). PCK is the topic-specific educational knowledge held by teachers: it covers the learning objectives, students’ difficulties and misconceptions, instructional and assessment strategies, and ways of evaluating students’ understanding of that topic.
To achieve this, we selected Design Science Research (DSR) as our main research method, because it provides a structured approach for developing and evaluating our PCK-informed automated feedback tool in a real-world context. We began by obtaining PCK on variables through a literature review and interviews with teachers of introductory programming at a university of applied sciences in the Netherlands. This resulted in a detailed categorization of this PCK, from which we created a taxonomy.
Next, we investigated the characteristics of effective feedback by conducting a literature review. This resulted in three principles relevant to our study: 1) Specificity and Clarity – related to the relevance and expression of PCK components; 2) Learning Process – related to feedback, feed-up, and feed-forward; 3) Constructiveness – related to emotional aspects such as positivity, motivation, and engagement. From these principles we then devised feedback metrics for evaluation purposes.
We continued with an investigation into the technical requirements. After comparing two leading LLMs, we selected Anthropic’s Claude-3.5-sonnet as the main model. Based on this, we designed an architecture for a Visual Studio Code plugin consisting of a ContextSelector, which selects the PCK on variables relevant to the nature of the feedback request, and an ExperienceTracker, which keeps track of student progress to dynamically adapt feedback style and language.
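The abstract names these two components but not their interfaces, so the sketch below is a minimal TypeScript model (the usual implementation language for Visual Studio Code plugins) of how a ContextSelector and an ExperienceTracker could jointly shape a prompt for the LLM. Every type, field, threshold, and the prompt layout is an illustrative assumption rather than the thesis’s actual design.

```typescript
// Hypothetical sketch of the two plugin components named in the abstract.
// All type names, fields, and the prompt layout are illustrative assumptions.

type FeedbackRequestKind = "naming" | "initialization" | "scope" | "general";

interface PckEntry {
  topic: string;            // e.g. "variable naming"
  misconceptions: string[]; // known student misconceptions for this topic
  strategies: string[];     // instructional strategies for this topic
}

// Selects the PCK relevant to the nature of the feedback request.
class ContextSelector {
  constructor(private readonly taxonomy: Map<FeedbackRequestKind, PckEntry>) {}

  select(kind: FeedbackRequestKind): PckEntry | undefined {
    return this.taxonomy.get(kind);
  }
}

// Tracks student progress so feedback style and language can adapt over time.
class ExperienceTracker {
  private requestCounts = new Map<FeedbackRequestKind, number>();

  record(kind: FeedbackRequestKind): void {
    this.requestCounts.set(kind, (this.requestCounts.get(kind) ?? 0) + 1);
  }

  // Assumed heuristic: fewer than three requests on a topic means the student
  // still gets scaffolded, encouraging feedback; afterwards it becomes concise.
  styleFor(kind: FeedbackRequestKind): "scaffolded" | "concise" {
    return (this.requestCounts.get(kind) ?? 0) < 3 ? "scaffolded" : "concise";
  }
}

// Assembles an LLM prompt from the selected PCK, the tracked experience level,
// and the student's code; the actual call to the model is out of scope here.
function buildPrompt(
  code: string,
  kind: FeedbackRequestKind,
  selector: ContextSelector,
  tracker: ExperienceTracker
): string {
  tracker.record(kind);
  const pck = selector.select(kind);
  const style = tracker.styleFor(kind);
  return [
    `Give ${style} feedback on variable usage (${kind}) in the student's program.`,
    pck ? `Watch for misconceptions: ${pck.misconceptions.join("; ")}.` : "",
    pck ? `Use these instructional strategies: ${pck.strategies.join("; ")}.` : "",
    "End with one self-check question for the student.",
    "Student code:",
    code,
  ].filter(Boolean).join("\n");
}

// Example usage (all sample data is invented for illustration):
const taxonomy: Map<FeedbackRequestKind, PckEntry> = new Map([
  ["naming", {
    topic: "variable naming",
    misconceptions: ["single-letter names are always acceptable"],
    strategies: ["ask the student what the stored value represents"],
  }],
]);
const selector = new ContextSelector(taxonomy);
const tracker = new ExperienceTracker();
console.log(buildPrompt("let x = 5;", "naming", selector, tracker));
```

In the actual plugin, the assembled prompt would presumably be sent to the selected model and the response rendered inside the editor; those steps are omitted from this sketch.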
Finally, we developed a prototype based on this architecture and asked experts and students to evaluate the feedback and assess whether it improves students’ learning experience. While the participants offered suggestions for improvement, our initial findings are encouraging. For example, students reported that the feedback helps them build good programming habits. They especially appreciated the self-check questions and indicated that they would use the tool before approaching a teacher and would recommend it to other students. This suggests that feedback generated with an LLM informed by PCK is useful to students.
Date of Award: 24 Jun 2025
Original language: English
Supervisor: Ebrahim Rahimi (Examiner) & Clara Maathuis (Co-assessor)

Master's Degree

  • Master Software Engineering
