Abstract
In the medical domain, the uptake of an AI tool crucially depends on whether clinicians are confident that they understand the tool. Bayesian networks are popular AI models in the medical domain, yet explaining predictions from Bayesian networks to physicians and patients is non-trivial. Various explanation methods for Bayesian network inference have appeared in the literature, each focusing on different aspects of the underlying reasoning. While there has been considerable technical research, little is known about the actual user experience of such methods. In this paper, we present the results of a study in which four different explanation approaches were evaluated through a survey: a group of human participants was questioned about their perceived understanding in order to gain insights into their user experience.
| Original language | English |
| --- | --- |
| Article number | 102438 |
| Number of pages | 12 |
| Journal | Artificial Intelligence in Medicine |
| Volume | 134 |
| DOIs | |
| Publication status | Published - Dec 2022 |
Keywords
- Bayesian networks
- Explainable AI
- User experience