Quantifying Explainability for Machine Learning Models

  • A. (Alexandra) van der Most

Student thesis: Master's Thesis

Abstract

In the development of machine learning models, the comprehensibility of a model is rarely considered, even as the number of newly developed applications is visibly increasing and ever more advanced machine learning algorithms are being introduced. There is as yet no unambiguous method for evaluating the interpretability of a model. Using post-hoc, model-agnostic interpretation methods, an effort is made to quantify the explainability of arbitrary machine learning models. Two dimensions are proposed, accuracy and complexity, to describe their relationship with the interpretability of a model. In this report, it is shown that the two dimensions are independent and that there is an inverse relationship between complexity and explainability. However, more dimensions with model-agnostic measures are needed to arrive at a fully expressive quantification of explainability. Finally, it is suggested that the exact level of explainability of a model depends strongly on the prediction task and the underlying dataset.
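The abstract does not state which concrete measures were used, so the sketch below is only a hedged illustration of the two proposed dimensions: it takes test-set accuracy as the accuracy dimension and decision-tree node count as a stand-in for model complexity, using scikit-learn. The aggregation into a single explainability score is hypothetical and merely mirrors the reported inverse relationship between complexity and explainability.

```python
# Illustrative sketch only -- not the thesis's actual metric.
# Dimension 1 (accuracy): test-set accuracy of the fitted model.
# Dimension 2 (complexity): number of nodes in a decision tree,
#   a simple structural complexity proxy (an assumption here).
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_train, y_train)

accuracy = model.score(X_test, y_test)   # dimension 1: predictive accuracy
complexity = model.tree_.node_count      # dimension 2: structural complexity

# Hypothetical aggregation: higher complexity lowers the score,
# echoing the inverse complexity-explainability relationship.
explainability = accuracy / (1.0 + complexity)

print(f"accuracy={accuracy:.3f}, complexity={complexity}, "
      f"explainability={explainability:.4f}")
```

Because both measures here are model-specific (tree node count has no analogue for, say, a neural network), a genuinely model-agnostic complexity measure would be needed to apply this scheme to arbitrary models, as the abstract itself notes.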
Date of Award: 15 Aug 2020
Original language: English
Supervisor: Deniz Iren (Examiner) & Stefano Bromuri (Co-assessor)

Keywords

  • Quantify Explainability
  • Interpretable Machine Learning
  • Explainable AI
  • Accuracy
  • Model Complexity

Master's Degree

  • Master Business Process Management & IT (BPMIT)
