TY - JOUR
T1 - Towards understanding students’ sensemaking of test case design
AU - Doorn, Niels
AU - Vos, Tanja E.J.
AU - Marín, Beatriz
N1 - Funding Information:
We would like to thank Silvio Cacace for his input on the assignment and for the use of the TestCompass tool, Erik Barendsen for his advice and feedback on this research, and of course the participating students from the bachelor GTDM at the Universitat Politècnica de València. The work leading to this paper has received funding from the European Union through the ENACTEST project (101055874) and the Erasmus+ project QPeD under contract number 2020-1-NL01-KA203-064626.
Publisher Copyright:
© 2023 The Author(s)
PY - 2023/7
Y1 - 2023/7
N2 - Context: Software testing is the most widely used technique for quality assurance in industry. However, in computer science education, software testing is still treated as a second-class citizen, and students are unable to test their software well enough. One reason for this is that teaching software testing is difficult, as it is a complex intellectual activity for which students need to allocate multiple cognitive resources at the same time. A myriad of primary and secondary studies have tried to solve this problem in education, but still with very limited results. Objective: Before we can design interventions to improve our pedagogical approaches, we need to gain a more in-depth understanding and recognition of sensemaking as it happens when students design test cases. Method: An initial exploratory study identified four different sensemaking approaches used by students while creating test models. In this paper, we present a follow-up study with 50 students from a large university in Spain. The methodology was based on that of the previous study, with improvements that originated from its evaluation. We asked the participants to create a test model based on a description of a test problem using a specialized web-based tool for modeling test cases. We measured how well these models fit the test problem, the sensemaking process that students went through when creating the models, and the students’ perception of the modeling task. The participants received no compensation for their efforts, and we scheduled the experiment during a regular class. Apart from the created models and their metadata, we also collected recordings of the students’ computer screens made during the experiment and used a questionnaire to study their perspectives on the assignment. All the collected textual, graphical, and video data were analyzed using an iterative inductive analysis process to allow new information about the different sensemaking approaches to emerge. Results: We gained better insights into the sensemaking processes of students while modeling test cases for a problem. The results enabled us to refine our previous findings, and we identified new sensemaking approaches. Conclusions: Based on these results, we can further investigate ways to influence the sensemaking process in education, the possible misconceptions that have a negative influence on it, and the desired mental model we want our students to have when designing test cases.
AB - Context: Software testing is the most widely used technique for quality assurance in industry. However, in computer science education, software testing is still treated as a second-class citizen, and students are unable to test their software well enough. One reason for this is that teaching software testing is difficult, as it is a complex intellectual activity for which students need to allocate multiple cognitive resources at the same time. A myriad of primary and secondary studies have tried to solve this problem in education, but still with very limited results. Objective: Before we can design interventions to improve our pedagogical approaches, we need to gain a more in-depth understanding and recognition of sensemaking as it happens when students design test cases. Method: An initial exploratory study identified four different sensemaking approaches used by students while creating test models. In this paper, we present a follow-up study with 50 students from a large university in Spain. The methodology was based on that of the previous study, with improvements that originated from its evaluation. We asked the participants to create a test model based on a description of a test problem using a specialized web-based tool for modeling test cases. We measured how well these models fit the test problem, the sensemaking process that students went through when creating the models, and the students’ perception of the modeling task. The participants received no compensation for their efforts, and we scheduled the experiment during a regular class. Apart from the created models and their metadata, we also collected recordings of the students’ computer screens made during the experiment and used a questionnaire to study their perspectives on the assignment. All the collected textual, graphical, and video data were analyzed using an iterative inductive analysis process to allow new information about the different sensemaking approaches to emerge. Results: We gained better insights into the sensemaking processes of students while modeling test cases for a problem. The results enabled us to refine our previous findings, and we identified new sensemaking approaches. Conclusions: Based on these results, we can further investigate ways to influence the sensemaking process in education, the possible misconceptions that have a negative influence on it, and the desired mental model we want our students to have when designing test cases.
KW - Computer science educational research
KW - Higher education
KW - Sensemaking
KW - Software Engineering
KW - Software Testing
UR - http://www.scopus.com/inward/record.url?scp=85161866807&partnerID=8YFLogxK
U2 - 10.1016/j.datak.2023.102199
DO - 10.1016/j.datak.2023.102199
M3 - Article
AN - SCOPUS:85161866807
SN - 0169-023X
VL - 146
JO - Data and Knowledge Engineering
JF - Data and Knowledge Engineering
M1 - 102199
ER -