Abstract
We introduce the Multimodal Learning Analytics (MMLA) Pipeline, a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces. The MMLA Pipeline helps researchers set up multimodal experiments, reducing the setup and configuration time required to collect meaningful datasets. With the MMLA Pipeline, researchers can choose a set of custom sensors to track different modalities, including behavioural cues or affective states, and quickly obtain multimodal sessions consisting of synchronised sensor data and video recordings. They can then analyse and annotate the recorded sessions and train machine learning algorithms to classify or predict the patterns under investigation.
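The synchronisation step described above, pairing sensor readings with video recordings, can be illustrated with a minimal sketch. This is not the MMLA Pipeline's actual API; the function name, data layout, and tolerance value are assumptions for illustration, showing one common approach: aligning each video frame timestamp to the nearest sensor sample within a tolerance window.

```python
from bisect import bisect_left

def align_to_video(sensor_samples, frame_times, tolerance=0.05):
    """Map each video frame time to the nearest sensor sample.

    sensor_samples: list of (timestamp, value) pairs, sorted by timestamp.
    frame_times: list of video frame timestamps (seconds).
    Returns a list of (frame_time, value_or_None) pairs; the value is None
    when no sensor sample lies within `tolerance` seconds of the frame.
    """
    stamps = [t for t, _ in sensor_samples]
    aligned = []
    for ft in frame_times:
        i = bisect_left(stamps, ft)
        # Candidates: the sample just before and just after the frame time.
        best = None
        for j in (i - 1, i):
            if 0 <= j < len(stamps) and abs(stamps[j] - ft) <= tolerance:
                if best is None or abs(stamps[j] - ft) < abs(stamps[best] - ft):
                    best = j
        aligned.append((ft, sensor_samples[best][1] if best is not None else None))
    return aligned
```

Annotations added to the video track can then be propagated to the aligned sensor values, giving labelled multimodal examples for training a classifier.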
Original language | English |
---|---|
Title of host publication | Proceedings of the Artificial Intelligence and Adaptive Education Conference - AIAED'19 |
Place of Publication | Beijing, China |
Pages | 1-2 |
Number of pages | 2 |
Publication status | Published - 23 May 2019 |
Event | 4th International Conference on AI + Adaptive Education - Beijing, China; Duration: 24 May 2019 → 25 May 2019; https://www.easychair.org/cfp/AIAED-19 |
Conference
Conference | 4th International Conference on AI + Adaptive Education |
---|---|
Abbreviated title | AIAED 2019 |
Country/Territory | China |
City | Beijing |
Period | 24/05/19 → 25/05/19 |
Internet address | https://www.easychair.org/cfp/AIAED-19 |