Explaining the Most Probable Explanation

Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


The use of Bayesian networks has been shown to be powerful for supporting decision making, for example in a medical context. A particularly useful inference task is the most probable explanation (MPE), which provides the most likely assignment to all random variables that is consistent with the given evidence. A downside of the MPE solution is that it is static and not very informative for (medical) domain experts. To overcome this problem, we were inspired by recent research results on augmenting Bayesian networks with argumentation theory. We use arguments to generate natural-language explanations of the MPE solution, making it more understandable to domain experts. Moreover, the approach allows decision makers to further explore explanations of different scenarios, providing more insight into why certain alternative explanations are considered less probable than the MPE solution.
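To make the MPE task concrete, the following toy sketch (not from the paper; the network, probabilities, and function names are illustrative assumptions) enumerates all full assignments of a tiny "sprinkler" Bayesian network and returns the most probable one consistent with the evidence:

```python
from itertools import product

# Hypothetical toy network (not the authors' model):
# Rain and Sprinkler are independent parents of WetGrass.
P_RAIN = {True: 0.2, False: 0.8}
P_SPRINKLER = {True: 0.1, False: 0.9}
# P(WetGrass=True | Rain, Sprinkler)
P_WET_TRUE = {(True, True): 0.99, (True, False): 0.9,
              (False, True): 0.9, (False, False): 0.0}

def joint(rain, sprinkler, wet):
    """Joint probability of one full assignment, via the chain rule."""
    p_wet = P_WET_TRUE[(rain, sprinkler)]
    return P_RAIN[rain] * P_SPRINKLER[sprinkler] * (p_wet if wet else 1.0 - p_wet)

def mpe(evidence):
    """Brute-force MPE: the most likely assignment to ALL variables
    that is consistent with the given evidence."""
    best, best_p = None, -1.0
    for rain, sprinkler, wet in product([True, False], repeat=3):
        assign = {"Rain": rain, "Sprinkler": sprinkler, "WetGrass": wet}
        if any(assign[v] != val for v, val in evidence.items()):
            continue  # skip assignments that contradict the evidence
        p = joint(rain, sprinkler, wet)
        if p > best_p:
            best, best_p = assign, p
    return best, best_p

solution, prob = mpe({"WetGrass": True})
print(solution, prob)  # MPE: Rain=True, Sprinkler=False (p = 0.162)
```

Brute-force enumeration is exponential in the number of variables; real MPE solvers use dynamic programming or branch-and-bound, but the semantics, a single static argmax assignment, are the same, which is exactly the property the paper's argumentation-based explanations aim to unpack.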
Original language: English
Title of host publication: Scalable Uncertainty Management
Subtitle of host publication: 12th International Conference, SUM 2018, Milan, Italy, October 3-5, 2018, Proceedings
Editors: Davide Ciucci, Gabriella Pasi, Barbara Vantaggi
Place of Publication: Cham
Publisher: Springer International Publishing AG
Number of pages: 14
ISBN (Print): 978-3-030-00461-3
Publication status: Published - 2018
Event: International Conference on Scalable Uncertainty Management - Milan, Italy
Duration: 3 Oct 2018 - 5 Oct 2018

Publication series

Series: Lecture Notes in Computer Science


Conference: International Conference on Scalable Uncertainty Management
Abbreviated title: SUM 2018


