One counterfactual does not make an explanation

Raphaela Butz, Arjen Hommersom, Marco Barenkamp, Hans van Ditmarsch

Research output: Contribution to conference › Paper › Academic

Abstract

Counterfactual explanations have gained popularity in artificial intelligence in recent years. It is well known that counterfactuals can be generated from causal Bayesian networks, but there is no indication of which of them are useful for explanatory purposes. In this paper, we examine which types of counterfactuals are perceived as more useful explanations by the end user. For this purpose, we conducted a questionnaire to test whether counterfactuals that change an actionable cause are considered more useful than counterfactuals that change a direct cause. The results of the questionnaire showed that actionable counterfactuals are preferred regardless of whether they change the direct cause or a cause on a longer causal chain.
Original language: English
Pages: 1-11
Publication status: Published - 2022
Event: BNAIC/BeNeLearn 2022 - Mechelen, Belgium
Duration: 7 Nov 2022 - 9 Nov 2022

Conference

Conference: BNAIC/BeNeLearn 2022
Country/Territory: Belgium
City: Mechelen
Period: 7/11/22 - 9/11/22
