Abstract
Artificial Intelligence (A.I.) is quickly becoming part of our daily lives. Therefore we, as a society, need to determine policies concerning its lifecycle. The Ethics Guidelines for Trustworthy A.I. (EGfTA) provide an initial set of guidelines for dealing with A.I. during its lifecycle (AI HLEG, 2019). However, it is still unclear whether these guidelines originate in scientific research or are based on assumptions. In this research, we examined the relationship between two EGfTA variables, namely transparency (and explainability) and human agency and human oversight. We distilled a model from the existing literature by combining, adjusting, and enriching input from several studies, and we conducted a quantitative study with 149 respondents based on this distilled model.
Analysis of the survey data shows that there is a relationship between transparency (and explainability) and human agency and human oversight within online media-based platforms: higher levels of perceived performance lead to higher levels of human agency and human oversight on these platforms. This provides empirical evidence for the relationship.
Additionally, we provide a theoretical research model that can be adapted for future research into the EGfTA requirements and their relationships. This model can serve as a first step for future research into the variable(s) human agency and human oversight, and we offer suggestions for improving the model and the research design.
| Date of Award | 2 Feb 2023 |
|---|---|
| Original language | English |
| Supervisor | Laury Bollen (Examiner) & Tim Huygh (Co-assessor) |
Master's Degree
- Master Business Process Management & IT (BPMIT)