Abstract
Stealth assessment (SA) is a methodology that uses machine learning (ML) to process data collected unobtrusively from serious games and produce inferences about learners' mastery levels. Although stealth assessment can yield valid and reliable assessments, its robustness across a wide range of conditions has not yet been examined, mainly because its practical application is complex, laborious, and time-consuming. Its exposure to different conditions has therefore been limited, as has its wider uptake by the serious-games community. Nevertheless, a framework for developing a generic tool has been proposed to lower these barriers. In this article, a generic SA software tool was developed based on this framework to examine the robustness of the stealth assessment methodology under various conditions. Specifically, the conditions relate to processing data sets of different distribution types and sizes (960 data sets containing a total of 72,336,000 data points are used for this purpose), utilizing two different ML algorithms (Gaussian Naïve Bayes Network and C4.5), and using statistical models relating to two different competency constructs. Results show that stealth assessment is a robust methodology, while the generic SA tool is a highly accurate tool capable of efficiently handling a wide range of conditions.
Original language | English |
---|---|
Pages (from-to) | 180-192 |
Number of pages | 13 |
Journal | IEEE Transactions on Games |
Volume | 13 |
Issue number | 2 |
DOIs | |
Publication status | Published - Jun 2021 |
Keywords
- Games
- Generic tool
- Machine learning
- Robustness
- Software tools
- Task analysis
- Tools
- Serious games
- Simulation
- Stealth assessment