Abstract
This paper introduces the Visual Inspection Tool (VIT), which supports researchers in annotating multimodal data and in processing and exploiting those data for learning purposes. While most existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT flexibly supports data annotation for different types of learning tasks captured with a customisable set of sensors. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time intervals and adding annotations to those intervals; and 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was thus far missing from the available tools for MMLA research. By filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline, a toolkit for orchestrating the use and application of various MMLA tools.
Original language | English |
---|---|
Title of host publication | LAK19 |
Subtitle of host publication | Proceedings of the 9th International Conference on Learning Analytics and Knowledge |
Place of Publication | New York, NY, USA |
Publisher | ACM |
Chapter | 7 |
Pages | 51-60 |
Number of pages | 10 |
Edition | 1 |
ISBN (Electronic) | 9781450362566 |
ISBN (Print) | 9781450362566 |
DOIs | |
Publication status | Published - 4 Mar 2019 |
Keywords
- Internet of Things, Learning Analytics, Multimodal Data, Sensors