Evaluating the Usefulness of Counterfactual Explanations from Bayesian Networks

Raphaela Butz*, Arjen Hommersom*, Renée Schulz*, Hans van Ditmarsch*

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

Abstract

Bayesian networks are commonly used for learning under uncertainty and for incorporating expert knowledge. However, they are hard to interpret, especially when the network structure is complex. Methods used to explain Bayesian networks operate under certain assumptions about what constitutes the best explanation, without actually verifying these assumptions. One such common assumption is that a shorter causal chain from one variable to another enhances its explanatory strength. Counterfactual explanations have gained popularity in artificial intelligence in recent years. It is well known that counterfactuals can be generated from causal Bayesian networks, but there is no indication of which of them are useful for explanatory purposes. In this paper, we examine how to apply findings from psychology to search for counterfactuals that are perceived as more useful explanations by the end user. For this purpose, we conducted a questionnaire to test whether counterfactuals that change an actionable cause are considered more useful than counterfactuals that change a direct cause. The results of the questionnaire indicate that actionable counterfactuals are preferred regardless of whether they change the direct cause or a cause with a longer causal chain.
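To make the idea of generating counterfactuals from a causal Bayesian network concrete, the following is a minimal sketch using the pgmpy library in Python. It is not the authors' code: the three-node chain (Exercise → BloodPressure → HeartAttack), its probabilities, and all variable names are illustrative assumptions, and the do-intervention shown is a simplified interventional reading of a counterfactual rather than a full Pearl-style abduction-action-prediction computation.

```python
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import CausalInference

# Illustrative causal chain: Exercise -> BloodPressure -> HeartAttack
# (all binary; state meanings noted per variable below).
model = BayesianNetwork([("Exercise", "BloodPressure"),
                         ("BloodPressure", "HeartAttack")])
model.add_cpds(
    # P(Exercise): 0 = sedentary, 1 = active
    TabularCPD("Exercise", 2, [[0.6], [0.4]]),
    # P(BloodPressure | Exercise): 0 = normal, 1 = high
    TabularCPD("BloodPressure", 2,
               [[0.3, 0.8],
                [0.7, 0.2]],
               evidence=["Exercise"], evidence_card=[2]),
    # P(HeartAttack | BloodPressure): 0 = no, 1 = yes
    TabularCPD("HeartAttack", 2,
               [[0.9, 0.5],
                [0.1, 0.5]],
               evidence=["BloodPressure"], evidence_card=[2]),
)
assert model.check_model()

ci = CausalInference(model)

# "What if the direct cause had been different?" -> do(BloodPressure = 0)
direct = ci.query(["HeartAttack"], do={"BloodPressure": 0},
                  show_progress=False)

# "What if the actionable upstream cause had been different?"
# -> do(Exercise = 1), acting through the longer causal chain.
actionable = ci.query(["HeartAttack"], do={"Exercise": 1},
                      show_progress=False)

print(direct)
print(actionable)
```

The two queries mirror the contrast studied in the questionnaire: intervening on the direct cause of the outcome versus intervening on an actionable cause further up the causal chain.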
Original language: English
Number of pages: 13
Journal: Human-Centric Intelligent Systems
DOIs
Publication status: E-pub ahead of print - 4 Apr 2024
