Abstract
Computer-supported collaborative learning (CSCL) technologies implement social-constructivist learning theories by supporting students in active and collaborative knowledge construction, encouraging them to share and discuss knowledge and arguments. The use of Social Annotation (SA) tools, in which students write annotations and engage in discussions, fits this process well. However, having students work in a CSCL environment does not mean they automatically participate in argumentative discussions. Previous research suggests that scaffolding students’ behavior through collaboration scripts (instructions that guide collaboration and discussion) encourages students to engage in more meaningful, high-quality discussions and interactions.
This study examines how, and to what extent, students doing assignments in an SA tool need to be supported in order to engage more often in collaboration through online argumentative discussions, and whether this improves the quality of their annotations.
Participants, procedure, design
The experiment took place in a second-year course at a Dutch university, using weekly assignments in the SA tool Perusall (n=59). During the experiment the control group received the normal instructions, while the experimental group received both the normal instructions and additional scaffolding in the form of collaboration scripts.
This study had a quasi-experimental, repeated-measures design: measurements were taken from three different assignments (baseline, after the intervention, and after the intervention had been faded out). We measured the percentage of annotations that students wrote in response to fellow students, and we scored the quality of the annotations against the levels of Bloom’s revised taxonomy. Finally, we attempted to validate the Linguistic Inquiry and Word Count tool (LIWC2015), combined with a list of Bloom’s verbs, for scoring annotations by comparing the LIWC2015 scores on a sample of annotations to the scores of three human raters. We were unable to validate this instrument, so the annotations were scored manually by the researcher.
The results showed significant differences between the experimental and control groups in the percentage of annotations written in response to fellow students after the intervention. However, these differences were caused both by an increase in the scores of the experimental group and by a decrease in the scores of the control group. Furthermore, we found no significant differences within the experimental group over time in the percentage of annotations written in response to fellow students. For the qualitative analysis we grouped students’ annotations into the lower and higher Bloom levels of cognitive processing and calculated the percentage of annotations at the higher levels. After the intervention, the experimental group scored significantly higher at the higher levels of cognitive processing than the control group. This effect did not remain after the scaffolding was fully faded out. We also found no correlation between students’ percentage of interactions and their percentage of annotations at the higher levels of cognitive processing.
This study could not confirm that the use of collaboration scripts significantly increased the number of interactions between students working on an assignment in an SA tool. It did show that students in the experimental group scored higher on the levels of Bloom’s revised taxonomy after the intervention; however, this effect did not persist over time after the scaffolding had been faded out.
Date of Award: 16 Oct 2020
Supervisor: Howard Spoelstra