This implementation is important because it highlights some difficulties that might arise in implementing peer review. The main problem, as identified by the lecturers, was that the task on which the peer review was based was too simple: groups of students produced a single question, and individuals reviewed these group questions and provided feedback. Also, while it is usually good that peer review is implemented as a regular activity in a course rather than as a one-off event - because there is a learning curve for students and it takes time for them to appreciate the benefits - this was probably not a good idea in this implementation, given that it meant repeating a simple review activity four times. It might have been better if the review activity had become progressively more complex. In addition, while the design whereby students individually review group work is valuable in many contexts, the large number of reviews that each group received perhaps did not result in much variation in feedback. The key lesson to be learned from this example is that peer review normally works best where the task is open-ended, complex and challenging. There must be opportunities for students to learn from seeing and evaluating others' work and from the feedback they receive; otherwise student motivation is likely to be very low.

Another issue in this implementation was the lack of incentive to participate in peer review, as it was voluntary. Incentive could be increased by aligning the peer review with a summative assessment task or by evaluating the quality of the peer reviews themselves. However, an appropriately challenging task would also increase motivation to participate.

David Nicol has supplied this commentary and the views expressed are his personal opinions only. Readers should bear in mind that the design of learning tasks is a complex process and that each design decision has varying and multiple effects.