Causal Inference and Bias in Learning Analytics: A Primer on Pitfalls Using Directed Acyclic Graphs

Joshua Weidlich, Dragan Gašević, Hendrik Drachsler

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

As a research field geared toward understanding and improving learning, Learning Analytics (LA) must be able to provide empirical support for causal claims. However, as a highly applied field, tightly controlled randomized experiments are not always feasible or desirable. Instead, researchers often rely on observational data, based on which they may be reluctant to draw causal inferences. The past decades have seen much progress concerning causal inference in the absence of experimental data. This paper introduces directed acyclic graphs (DAGs), an increasingly popular tool for visually determining the validity of causal claims. Based on this, three basic pitfalls are outlined: confounding bias, overcontrol bias, and collider bias. Further, the paper shows how these pitfalls may be present in the published LA literature and discusses possible remedies. Finally, this approach is considered in light of practical constraints and the need for theoretical development.
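One of the abstract's pitfalls, collider bias, can be illustrated with a minimal simulation: two causes that are truly independent become spuriously correlated once we condition on (e.g., select on) a common effect. The variable names and the selection threshold below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two independent causes (hypothetical example: "motivation" and "prior knowledge")
x = rng.normal(size=n)
y = rng.normal(size=n)

# Collider: a common effect of both causes
# (hypothetical example: a "forum participation" score)
c = x + y + rng.normal(scale=0.5, size=n)

def corr(a, b):
    """Pearson correlation between two arrays."""
    return float(np.corrcoef(a, b)[0, 1])

# Unconditionally, x and y are uncorrelated (they were generated independently).
print(f"corr(x, y) overall:            {corr(x, y):+.3f}")

# Conditioning on the collider, e.g., analyzing only high-participation cases,
# induces a spurious negative association between the independent causes.
mask = c > 1.0
print(f"corr(x, y) given c > 1:        {corr(x[mask], y[mask]):+.3f}")
```

In a DAG, this corresponds to the structure x → c ← y: the path between x and y through c is blocked until the analyst conditions on c, which opens it and biases the estimated relationship.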

Original language: English
Pages (from-to): 183–199
Number of pages: 17
Journal: Journal of Learning Analytics
Volume: 9
Issue number: 3
DOIs
Publication status: Published - 16 Dec 2022

Keywords

  • DAG
  • LA
  • Learning analytics
  • bias
  • causal inference
  • directed acyclic graphs
  • observational research
  • research design
