Read Between the Lines: An Annotation Tool for Multimodal Data for Learning

Daniele Di Mitri, Jan Schneider, Roland Klemke, Marcus Specht, Hendrik Drachsler

    Research output: Chapter in Book/Report/Conference proceeding › Conference article in proceeding › Academic › peer-review


    This paper introduces the Visual Inspection Tool (VIT), which supports researchers in the annotation, processing and exploitation of multimodal data for learning purposes. While most existing Multimodal Learning Analytics (MMLA) solutions are tailor-made for specific learning tasks and sensors, the VIT flexibly supports data annotation for different types of learning tasks captured with a customisable set of sensors. The VIT supports MMLA researchers in 1) triangulating multimodal data with video recordings; 2) segmenting the multimodal data into time intervals and adding annotations to those intervals; and 3) downloading the annotated dataset and using it for multimodal data analysis. The VIT is a crucial component that was so far missing from the available tools for MMLA research. By filling this gap we also identified an integrated workflow that characterises current MMLA research. We call this workflow the Multimodal Learning Analytics Pipeline: a toolkit for orchestrating the use and application of various MMLA tools.

    Original language: English
    Title of host publication: LAK19
    Subtitle of host publication: Proceedings of the 9th International Conference on Learning Analytics and Knowledge
    Place of publication: New York, NY, USA
    Number of pages: 10
    ISBN (Electronic): 9781450362566
    ISBN (Print): 9781450362566
    Publication status: Published - 4 Mar 2019


    Keywords: Internet of Things, Learning Analytics, Multimodal data, Sensors
