Abstract
This study investigated to what extent multimodal data can be used to detect mistakes during Cardiopulmonary Resuscitation (CPR) training. We complemented the Laerdal QCPR ResusciAnne manikin with the Multimodal Tutor for CPR, a multi-sensor system consisting of a Microsoft Kinect for tracking body position and a Myo armband for collecting electromyogram information. We collected multimodal data from 11 medical students, each of them performing two sessions of two-minute chest compressions (CCs). We gathered in total 5254 CCs, all labelled according to five performance indicators corresponding to common CPR training mistakes. Three out of five indicators (CC rate, CC depth and CC release) were assessed automatically by the ResusciAnne manikin. The remaining two, related to arms and body position, were annotated manually by the research team. We trained five neural networks, one for classifying each of the five indicators. The results of the experiment show that multimodal data can provide mistake detection as accurate as the ResusciAnne manikin baseline. We also show that the Multimodal Tutor for CPR can detect additional CPR training mistakes, such as the correct use of arms and body weight; thus far, these mistakes could be identified only by human instructors. Finally, to inform user feedback in future implementations of the Multimodal Tutor for CPR, we administered a questionnaire to collect feedback on valuable aspects of CPR training.
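The abstract describes training one neural network per performance indicator on multimodal sensor features. The following is a minimal, hypothetical sketch of that setup for a single indicator: the feature dimensions, synthetic data, and the choice of a small scikit-learn MLP are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: one binary classifier per CPR performance indicator.
# Feature layout, sizes, and model are assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in multimodal features per chest compression, e.g. Kinect joint
# coordinates concatenated with Myo EMG channels (dimensions assumed).
X = rng.normal(size=(500, 24))
# Synthetic binary label for one indicator (e.g. "CC depth in target range").
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# A small multilayer perceptron, analogous to training one network
# per indicator as described in the abstract.
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
```

In the study, five such classifiers would be trained, one per indicator, with the three manikin-scored indicators providing automatic labels and the two posture-related indicators using the manual annotations.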
| Original language | English |
| --- | --- |
| Article number | 3099 |
| Number of pages | 20 |
| Journal | Sensors |
| Volume | 19 |
| Issue number | 14 |
| DOIs | |
| Publication status | Published - 13 Jul 2019 |
Keywords
- activity recognition
- learning analytics
- medical simulation
- multimodal data
- neural networks
- psychomotor learning
- sensors
- signal processing
- training mistakes