Effects of prior knowledge and joint attention on learning from eye movement modelling examples

Lucia B. Chisari, Akvilė Mockevičiūtė, Sterre K. Ruitenburg, Lian van Vemde, Ellen M. Kok*, Tamara van Gog

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Eye movement modelling examples (EMMEs) are instructional videos of a model's demonstration and explanation of a task that also show where the model is looking. EMMEs are expected to synchronize students' visual attention with the model's, leading to better learning than regular video modelling examples (MEs). However, synchronization is seldom directly tested. Moreover, recent research suggests that EMMEs might be more effective than MEs for learners with low prior knowledge. We therefore used a 2 × 2 between-subjects design to investigate whether the effectiveness of EMMEs (EMMEs/MEs) is moderated by prior knowledge (high/low, manipulated by pretraining), applying eye tracking to assess synchronization. Contrary to expectations, EMMEs did not lead to higher learning outcomes than MEs, and no interaction with prior knowledge was found. Structural equation modelling shows the mechanism through which EMMEs affect learning: seeing the model's eye movements helped learners to look faster at referenced information, which was associated with higher learning outcomes.
Original language: English
Pages (from-to): 569-579
Number of pages: 11
Journal: Journal of Computer Assisted Learning
Volume: 36
Issue number: 4
DOIs
Publication status: Published - Aug 2020
Externally published: Yes

