This document (D2.2.2) describes the LinkedUp consortium's experience in developing and continuously improving the LinkedUp Evaluation Framework (EF) throughout three open educational Web data competitions: Veni, Vidi, Vici. D2.2.2 is the final report on the Evaluation Framework. It synthesises the work done in the previous WP2 deliverables (D2.1, D2.2.1, D2.3.1, D2.3.2, D2.3.3), reporting on best practices, suggesting improvements, and outlining possible adjustments for additional application areas.

The initial version of the EF was developed by applying the Group Concept Mapping (GCM) methodology, which used advanced statistical techniques to objectively identify the shared vision of experts in technology-enhanced learning on the criteria and indicators of the EF. The GCM thereby contributed to the construct and content validity of the EF. The first version of the EF was tested during the Learning Analytics and Knowledge Conference 2013 (LAK 13). After each competition round (Veni, Vidi, Vici), the usefulness and ease of use of the EF were tested with a number of experts through a questionnaire and interviews, and the analysis of the resulting data suggested further improvements.

In this final report on the EF we summarise the lessons learned and provide six main suggestions for future data competition developers:

1. Designing a data competition starts with a definition of the evaluation criteria.
2. Test the understandability of your evaluation criteria before publishing them.
3. Do not use a 'not applicable' option for evaluation indicators.
4. Fewer (indicators) are more (preferable).
5. Unify the scales of the evaluation indicators.
6. Weighting important evaluation criteria can be very informative.

Finally, we present the final version of the LinkedUp EF and refer to the LinkedUp toolbox, which provides all lessons learned and further information for future data competition organisers.
Publication status: Published - 31 Oct 2014
- Linked Data
- data competition
- evaluation framework