Reliability and validity test of a Scoring Rubric for Information Literacy

Jos van Helvoort, Saskia Brand-Gruwel, Frank Huysmans, Ellen Sjoer

    Research output: Contribution to journal › Article › Academic › peer-review

    Abstract

    Purpose – The purpose of this paper is to measure the reliability and validity of the Scoring Rubric for Information Literacy (Van Helvoort, 2010).
    Design/methodology/approach – Percentages of agreement and intraclass correlation were used to describe interrater reliability. Construct validity was determined using factor analysis and reliability analysis. Criterion validity was calculated with Pearson correlations.
    Findings – In the described case, the Scoring Rubric for Information Literacy appears to be a reliable and valid instrument for the assessment of information literate performance.
    Originality/value – Reliability and validity are prerequisites for recommending a rubric for application. The results confirm that this Scoring Rubric for Information Literacy can be used in higher education courses, not only for assessment purposes but also to foster learning.
    Original language: English
    Pages (from-to): 305-316
    Number of pages: 12
    Journal: Journal of Documentation
    Volume: 73
    Issue number: 2
    DOIs
    Publication status: Published - 13 Mar 2017

    Keywords

    • information literacy
    • assessment
