Explaining the Most Probable Explanation

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review

Abstract

Bayesian networks have been shown to be powerful for supporting decision making, for example in a medical context. A particularly useful inference task is the most probable explanation (MPE), which provides the most likely assignment to all the random variables that is consistent with the given evidence. A downside of this MPE solution is that it is static and not very informative for (medical) domain experts. In our research to overcome this problem, we were inspired by recent results on augmenting Bayesian networks with argumentation theory. We use arguments to generate explanations of the MPE solution in natural language, making it more understandable for the domain expert. Moreover, the approach allows decision makers to further explore explanations of different scenarios, providing more insight into why certain alternative explanations are considered less probable than the MPE solution.
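
To make the MPE task concrete, the following is a minimal sketch (not the authors' code) using the pgmpy Python library; the toy three-variable network and all probabilities are illustrative assumptions. Given evidence that a patient has dyspnoea, the MPE is the single most likely joint assignment to the remaining variables.

# Minimal MPE sketch with pgmpy; network and CPDs are made up for illustration.
from pgmpy.models import BayesianNetwork
from pgmpy.factors.discrete import TabularCPD
from pgmpy.inference import VariableElimination

# Toy medical network: Smoking -> Bronchitis -> Dyspnoea (all binary).
model = BayesianNetwork([("Smoking", "Bronchitis"), ("Bronchitis", "Dyspnoea")])
model.add_cpds(
    TabularCPD("Smoking", 2, [[0.7], [0.3]]),
    TabularCPD("Bronchitis", 2, [[0.9, 0.4], [0.1, 0.6]],
               evidence=["Smoking"], evidence_card=[2]),
    TabularCPD("Dyspnoea", 2, [[0.8, 0.2], [0.2, 0.8]],
               evidence=["Bronchitis"], evidence_card=[2]),
)

# The MPE is the MAP assignment over *all* unobserved variables given the evidence.
infer = VariableElimination(model)
mpe = infer.map_query(variables=["Smoking", "Bronchitis"],
                      evidence={"Dyspnoea": 1}, show_progress=False)
print(mpe)  # -> {'Smoking': 1, 'Bronchitis': 1}

The printed assignment is the MPE solution the abstract refers to: a single static answer, which is exactly what the paper's argumentation-based explanations aim to make more informative.
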
Original language: English
Title of host publication: Scalable Uncertainty Management
Subtitle of host publication: 12th International Conference, SUM 2018, Milan, Italy, October 3-5, 2018, Proceedings
Editors: Davide Ciucci, Gabriella Pasi, Barbara Vantaggi
Place of publication: Cham
Publisher: Springer International Publishing AG
Pages: 50-63
Number of pages: 14
ISBN (Print): 978-3-030-00461-3
DOIs: https://doi.org/10.1007/978-3-030-00461-3_4
Publication status: Published - 2018
Event: International Conference on Scalable Uncertainty Management (SUM 2018) - Milan, Italy
Duration: 3 Oct 2018 - 5 Oct 2018
https://link.springer.com/book/10.1007/978-3-030-00461-3

Publication series

Name: Lecture Notes in Computer Science
Publisher: Springer
Volume: 11142

Conference

Conference: International Conference on Scalable Uncertainty Management
Abbreviated title: SUM 2018
Country: Italy
City: Milan
Period: 3/10/18 - 5/10/18
Internet address: https://link.springer.com/book/10.1007/978-3-030-00461-3

Fingerprint

Bayesian networks
Random variables
Decision making

Cite this

Butz, R., Hommersom, A., & van Eekelen, M. (2018). Explaining the Most Probable Explanation. In D. Ciucci, G. Pasi, & B. Vantaggi (Eds.), Scalable Uncertainty Management: 12th International Conference, SUM 2018, Milan, Italy, October 3-5, 2018, Proceedings (pp. 50-63). (Lecture Notes in Computer Science; Vol. 11142). Cham: Springer International Publishing AG. https://doi.org/10.1007/978-3-030-00461-3_4