Eliciting formative assessment in peer review

Authors

  • Ilya M. Goldin
  • Kevin D. Ashley

DOI:

https://doi.org/10.17239/jowr-2012.04.02.5

Keywords:

computer-supported peer-review, conceptual feedback

Abstract

Computer-supported peer review systems can support reviewers and authors in many different ways, including through the use of different kinds of reviewing criteria. It has become an increasingly important empirical question to determine whether reviewers are sensitive to different criteria and whether some kinds of criteria are more effective than others. In this work, we compared the differential effects of two types of rating prompts, each focused on a different set of criteria for evaluating writing: prompts that focus on domain-relevant aspects of writing composition versus prompts that focus on issues directly pertaining to the assigned problem and to the substantive issues under analysis. We found evidence that reviewers are sensitive to the differences between the two types of prompts; that reviewers distinguish among problem-specific issues but not among domain-writing ones; that both types of ratings correlate with instructor scores; and that problem-specific ratings are more likely to be helpful and informative to peer authors, in that they are less redundant.

Published

2012-10-15

Section

Articles

How to Cite

Goldin, I. M., & Ashley, K. D. (2012). Eliciting formative assessment in peer review. Journal of Writing Research, 4(2), 203-237. https://doi.org/10.17239/jowr-2012.04.02.5
