Explainability of a black-box system in AI-systems

  • T.R. van Diemen

Student thesis: Master's Thesis


According to experts, superhuman capabilities will be achieved in strategic areas in the coming decades with the help of Artificial Intelligence (AI), offering enormous opportunities for progress in fields such as medicine and health, transportation, energy, education, science, economic growth, and ecological sustainability. However, the rapid development of AI raises ethical concerns about how the technology is applied. To further promote the acceptance and integration of AI in society, trust is crucial, and stakeholders recognize the need for good AI governance. This includes a variety of tools and solutions that influence the development and application of AI, such as promoting standards, ethics, and a value framework. Accountability is seen as crucial for creating and maintaining user trust in AI systems, with transparency playing an important role. However, transparency is a complex concept with multiple layers and perspectives, involving various levels and approaches to transparency and multiple stakeholders. Moreover, transparency can also have drawbacks, so fully opening the "black box" of AI systems is not always the best solution. This research aims to reduce the lack of transparency by examining various explanation techniques. These include global model-agnostic explanations for black-box AI systems, in which the model is considered as a whole and the output variable is explained in terms of all input variables. The research focuses on the impact of transparency on users' understanding of and trust in AI systems. It shows that transparency has a positive effect on understanding but only a limited effect on users' trust in AI systems.
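The global model-agnostic approach described in the abstract, explaining the output in terms of all input variables without inspecting the model's internals, can be illustrated with permutation feature importance. This is a minimal sketch under assumed conditions: the `predict` function and the synthetic dataset are hypothetical stand-ins, not material from the thesis itself.

```python
import random

# Toy "black box": we may only call predict(), never inspect its internals.
# Hypothetical model whose output depends strongly on x[0], weakly on x[1].
def predict(x):
    return 3.0 * x[0] + 0.5 * x[1]

def mse(model, X, y):
    """Mean squared error of the model on dataset (X, y)."""
    return sum((model(x) - t) ** 2 for x, t in zip(X, y)) / len(y)

def permutation_importance(model, X, y, n_repeats=10, seed=0):
    """Global, model-agnostic importance: shuffle one input column at a
    time and measure how much the model's error grows on average."""
    rng = random.Random(seed)
    baseline = mse(model, X, y)
    importances = []
    for j in range(len(X[0])):
        increases = []
        for _ in range(n_repeats):
            col = [x[j] for x in X]
            rng.shuffle(col)                      # break the feature-target link
            X_perm = [list(x) for x in X]
            for i, v in enumerate(col):
                X_perm[i][j] = v
            increases.append(mse(model, X_perm, y) - baseline)
        importances.append(sum(increases) / n_repeats)
    return importances

# Synthetic data generated by the same rule, so the baseline error is zero.
rng = random.Random(1)
X = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(200)]
y = [predict(x) for x in X]

imp = permutation_importance(predict, X, y)
# Feature 0 (weight 3.0) should score far higher than feature 1 (weight 0.5).
```

Because the technique only needs prediction access, it applies to any black-box model; the trade-off is that a single global score can hide interactions between inputs.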
Date of Award: 2 Jul 2023
Original language: Dutch
Supervisors: Laury Bollen (Supervisor) & Tim Huygh (Co-assessor)

Master's Degree

  • Master Business Process Management & IT (BPMIT)
