Abstract
The field of AI has shown that organizations can benefit from using AI, but also that its use involves risks. These risks must be addressed in order to implement AI responsibly. In this research, a framework is proposed that can be used to deal with these risks. The main research question of this paper is: 'How should a framework be developed that can be used to deal with risks in the context of Responsible AI?'
This research paper describes how the RAI Risk Framework was developed and provides a visual representation of the framework. Aligned with the principles of RAI, the framework focuses on 'a responsible implementation'. The five principles used in the framework are: ethics, responsibility, accountability, privacy & security, and explainability.
|Date of Award||31 Mar 2022|
|Supervisor||Laury Bollen (Examiner) & Tim Huygh (Co-assessor)|
- Responsible AI
- Risk framework
- AI Governance
- RAI principles