Intelligent Feedback on Hypothesis Testing

Sietske Tacoma*, Bastiaan Heeren, Johan Jeuring, Paul Drijvers

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review


Hypothesis testing involves a complex stepwise procedure that is challenging for many students in introductory university statistics courses. In this paper we assess how feedback from an Intelligent Tutoring System can address the logic of hypothesis testing, and whether such feedback contributes to first-year social sciences students’ proficiency in carrying out hypothesis tests. The feedback design combined elements of the model-tracing and constraint-based modeling paradigms to address both the individual steps and the relations between them. To evaluate the feedback, students in an experimental group (N = 163) received the designed intelligent feedback in six hypothesis-testing construction tasks, while students in a control group (N = 151) received only stepwise verification feedback in these tasks. Results showed that students receiving intelligent feedback spent more time on the tasks, solved more tasks, and made fewer errors than students receiving only verification feedback. These positive results did not transfer to follow-up tasks, which might be a consequence of the isolated nature of those tasks. We conclude that the designed feedback may support students in learning to solve hypothesis-testing construction tasks independently and that it facilitates the creation of more hypothesis-testing construction tasks.
Original language: English
Pages (from-to): 616-636
Number of pages: 21
Journal: International Journal of Artificial Intelligence in Education
Issue number: 4
Early online date: 9 Oct 2020
Publication status: Published - Nov 2020


Keywords
  • Feedback
  • Hypothesis testing
  • Intelligent tutoring systems
  • Statistics education
