The Multimodal Learning Analytics Pipeline

D. Di Mitri, J. Schneider Barnes, M.M. Specht, H.J. Drachsler

    Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


    Abstract

    We introduce the Multimodal Learning Analytics Pipeline, a generic approach for collecting and exploiting multimodal data to support learning activities across physical and digital spaces. The MMLA Pipeline helps researchers set up multimodal experiments, reducing the setup and configuration time required to collect meaningful datasets. Using the MMLA Pipeline, researchers can choose a set of custom sensors to track different modalities, including behavioural cues and affective states. As a result, they can quickly obtain multimodal sessions consisting of synchronised sensor data and video recordings. They can then analyse and annotate the recorded sessions and train machine learning algorithms to classify or predict the patterns under investigation.
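    The abstract mentions synchronising sensor streams with video recordings. As a minimal sketch of what such an alignment step could look like (the function name, data layout, and nearest-timestamp strategy are illustrative assumptions, not the paper's actual implementation), each video frame timestamp can be matched to the closest sensor sample on a shared clock:

    ```python
    from bisect import bisect_left

    def align_to_video(sensor_samples, video_timestamps):
        """Map each video frame timestamp to the nearest sensor sample.

        sensor_samples: list of (timestamp, value) pairs, sorted by timestamp.
        video_timestamps: sorted frame timestamps on the same clock.
        Returns a list of (frame_timestamp, sensor_value) pairs.
        """
        ts = [t for t, _ in sensor_samples]
        aligned = []
        for ft in video_timestamps:
            i = bisect_left(ts, ft)
            # Choose the nearer neighbour between ts[i-1] and ts[i].
            if i == 0:
                j = 0
            elif i == len(ts) or ft - ts[i - 1] <= ts[i] - ft:
                j = i - 1
            else:
                j = i
            aligned.append((ft, sensor_samples[j][1]))
        return aligned

    # Example: a 10 Hz sensor stream aligned to two video frames.
    sensor = [(0.0, "a"), (0.1, "b"), (0.2, "c"), (0.3, "d")]
    frames = [0.05, 0.28]
    print(align_to_video(sensor, frames))  # → [(0.05, 'a'), (0.28, 'd')]
    ```

    In practice a real pipeline would also need clock synchronisation across devices and interpolation for continuous signals; nearest-neighbour matching is only the simplest option.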
    Original language: English
    Title of host publication: Proceedings of the Artificial Intelligence and Adaptive Education Conference - AIAED'19
    Place of publication: Beijing, China
    Pages: 1-2
    Number of pages: 2
    Publication status: Published - 23 May 2019
    Event: 4th International Conference on AI + Adaptive Education - Beijing, China
    Duration: 24 May 2019 - 25 May 2019
    https://www.easychair.org/cfp/AIAED-19

    Conference

    Conference: 4th International Conference on AI + Adaptive Education
    Abbreviated title: AIAED 2019
    Country/Territory: China
    City: Beijing
    Period: 24/05/19 - 25/05/19
