Investigating the understandability of XAI methods for enhanced user experience: When Bayesian network users became detectives

Raphaela Butz, Renée Schulz, Arjen Hommersom, Marko van Eekelen

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

In the medical domain, the uptake of an AI tool crucially depends on whether clinicians are confident that they understand it. Bayesian networks are popular AI models in medicine, yet explaining predictions from Bayesian networks to physicians and patients is non-trivial. Various explanation methods for Bayesian network inference have appeared in the literature, each focusing on different aspects of the underlying reasoning. While there has been substantial technical research, little is known about the actual user experience of such methods. In this paper, we present the results of a study in which four explanation approaches were evaluated through a survey, questioning a group of human participants about their perceived understanding in order to gain insight into their user experience.
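
To make concrete why Bayesian network predictions can be hard to interpret, the following is a minimal sketch of Bayesian network inference using the pgmpy library. The two-node "Disease → Test" structure, the variable names, and all probabilities are illustrative assumptions, not a model from the study.

```python
# Minimal sketch of Bayesian network inference with pgmpy
# (assumes a recent pgmpy release where BayesianNetwork is available).
# Structure and probabilities are hypothetical, for illustration only.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Two-node network: a disease influences a diagnostic test result.
model = BayesianNetwork([("Disease", "Test")])

# P(Disease): 1% prior prevalence (hypothetical).
cpd_disease = TabularCPD("Disease", 2, [[0.99], [0.01]])

# P(Test | Disease): 5% false-positive rate, 90% sensitivity (hypothetical).
cpd_test = TabularCPD(
    "Test", 2,
    [[0.95, 0.10],   # Test negative, given Disease = 0 and Disease = 1
     [0.05, 0.90]],  # Test positive, given Disease = 0 and Disease = 1
    evidence=["Disease"], evidence_card=[2],
)

model.add_cpds(cpd_disease, cpd_test)
assert model.check_model()

# Posterior P(Disease | Test = positive) via exact inference.
posterior = VariableElimination(model).query(["Disease"], evidence={"Test": 1})
print(posterior)
```

With these illustrative numbers, the posterior probability of disease given a positive test is only about 15%, despite the test's 90% sensitivity, because the low prior prevalence dominates. Counterintuitive results of this kind are exactly what explanation methods for Bayesian network inference aim to make transparent to clinicians.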
Original language: English
Article number: 102438
Number of pages: 12
Journal: Artificial Intelligence in Medicine
Volume: 134
Publication status: Published - Dec 2022

Keywords

  • Bayesian networks
  • Explainable AI
  • User experience
