One counterfactual does not make an explanation

Raphaela Butz, Arjen Hommersom, Marco Barenkamp, Hans van Ditmarsch

Research output: Contribution to conference › Paper › Academic


Counterfactual explanations have gained popularity in artificial
intelligence in recent years. It is well known that counterfactuals can
be generated from causal Bayesian networks, but there is no indication
of which of them are useful for explanatory purposes. In this paper, we
examine which types of counterfactuals end users perceive as more useful
explanations. For this purpose, we conducted a questionnaire to test
whether counterfactuals that change an actionable cause are considered
more useful than counterfactuals that change a direct cause. The results
of the questionnaire show that actionable counterfactuals are preferred
regardless of whether they change the direct cause or a cause further up
the causal chain.
Original language: English
Publication status: Published - 2022
Event: BNAIC/BeNeLearn 2022 - Mechelen, Belgium
Duration: 7 Nov 2022 - 9 Nov 2022


Conference: BNAIC/BeNeLearn 2022

