Computing Science: Distributed Information Management

The Distributed Information Management course aims to teach students to understand and manage the complexities of developing web applications. The course is taken by 60 third-year undergraduate computing students along with 31 students studying Masters programmes. The teaching structure consists of two one-hour lectures per week and a two-hour laboratory. Students work in small groups to first specify, design and then implement a realistic distributed web application such as a mash-up, micro-blogging site or search portal. The first group assignment is the specification of the web application, which is peer-reviewed. Peer review involves students individually providing feedback comments on the designs produced by other groups. Subsequently, the groups use this feedback to enhance their own design and its implementation.

Peer review was introduced to provide students with direct experience of how designs are developed in industry where developers invariably receive feedback from multiple stakeholders. 

AT A GLANCE

School: School of Computing Science

Module: Distributed Information Management Systems and Internet Technologies

Students: 60 Level 3 students and 31 Masters Level

Task: The task on which peer review focused was a draft specification and design of a web application. In small groups (3-5 members), students co-author a report on the draft specification. Each student then reviews reports from two other groups. Hence, each group receives around eight to ten feedback reviews of its draft specification. Formative feedback is also provided by the lecturers and teaching assistants during a meeting with each group held during the weekly laboratory sessions. For the final specification and design report, each group is asked to submit a summary of the feedback that it received and to provide a reflective commentary explaining how that feedback was used.

Peer Review: Students are provided with a rubric to guide their peer reviewing. The rubric directs the reviewer to consider completeness, clarity, use of diagrams, and thoughtfulness of the design, and then to assess how useful the report is from the perspectives of a developer, a client, and a boss. These perspectives are discussed beforehand in the lectures, where the teacher emphasizes that the rubric is intended to help students develop a sense of audience and an appreciation of different stakeholders' perspectives in design.

Findings: The evaluation comprised a short survey, a comparison of the differences in marks between the initial and final report, an analysis of the reflective reports detailing how groups had used the feedback, and the written commentaries that students had provided on other groups' designs. Students reported that reviewing the designs produced by other groups was beneficial and that peers provided high-quality feedback. The group marks for the designs improved from a mean of 60% to a mean of 76% between initial draft and final submission. Students' reflective reports showed high levels of engagement with and use of peer feedback. The commentary data has not been analysed to date, but inspection shows that most students engaged in high-level evaluation and provided rich feedback on the designs they reviewed.

Software: Aropa, developed by John Hamer

Course Leader: Dr Leif Azzopardi, Computing Science, University of Glasgow, Leif.Azzopardi@glasgow.ac.uk
