Abstract
Counterfactual explanations have gained popularity in artificial
intelligence in recent years. It is well known that counterfactuals can be
generated from causal Bayesian networks, but there is no indication of
which of them are useful for explanatory purposes. In this paper, we
examine which types of counterfactuals are perceived as more useful
explanations by the end user. For this purpose, we conducted a
questionnaire to test whether counterfactuals that change an actionable
cause are considered more useful than counterfactuals that change a
direct cause. The results of the questionnaire showed that actionable
counterfactuals are preferred regardless of whether the changed cause is
the direct cause or lies on a longer causal chain.
| Original language | English |
|---|---|
| Pages | 1-11 |
| Publication status | Published - 2022 |
| Event | BNAIC/BeNeLearn 2022, Mechelen, Belgium. Duration: 7 Nov 2022 → 9 Nov 2022 |
Conference

| Conference | BNAIC/BeNeLearn 2022 |
|---|---|
| Country/Territory | Belgium |
| City | Mechelen |
| Period | 7/11/22 → 9/11/22 |