Eliciting formative assessment in peer review
DOI:
https://doi.org/10.17239/jowr-2012.04.02.5
Keywords:
computer-supported peer-review, conceptual feedback
Abstract
Computer-supported peer review systems can support reviewers and authors in many different ways, including through the use of different kinds of reviewing criteria. It has become an increasingly important empirical question whether reviewers are sensitive to different criteria and whether some kinds of criteria are more effective than others. In this work, we compared the differential effects of two types of rating prompts, each focused on a different set of criteria for evaluating writing: prompts that focus on domain-relevant aspects of writing composition versus prompts that focus on issues directly pertaining to the assigned problem and to the substantive issues under analysis. We found evidence that reviewers are sensitive to the differences between the two types of prompts; that reviewers distinguish among problem-specific issues but not among domain-writing issues; that both types of ratings correlate with instructor scores; and that problem-specific ratings are more likely to be helpful and informative to peer authors in that they are less redundant.
Published
2012-10-15
How to Cite
Goldin, I. M., & Ashley, K. D. (2012). Eliciting formative assessment in peer review. Journal of Writing Research, 4(2), 203–237. https://doi.org/10.17239/jowr-2012.04.02.5
Issue
Vol. 4 No. 2 (2012)
Section
Articles
License
Copyright (c) 2012 Ilya M. Goldin, Kevin D. Ashley
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivs 3.0 Unported License.