REAP Conference Fora (in programme order)
Subject: Collaborative writing session: Facilitator review

Catherine Owen
Posts: 27

23/05/2007 12:21  
Commentary on:
Quintin Cutts: Essay Writing with Peer Reviewing and Marking
Nandini Das and Stuart McGugan: Shakespeare: Page, Stage, Screen

Catherine Owen, Session Facilitator


Both of these case studies describe assessment activities that represent divergence from disciplinary orthodoxies. In computing science, collaborative work is commonplace, but there is traditionally little emphasis on formal writing skills. In English literature studies, assignments are rarely collaborative and the assessment of group working skills uncommon.

The authors of both case studies point to the need to address employability skills agendas as the driver to introduce different types of tasks into their courses. For Nandini Das and Stuart McGugan, collaborative writing activities and the evaluation and assessment of team effort are a response to the HEA’s 2004 student employability profile for English studies graduates which identified skills weaknesses in team working, problem-solving, time management, working under pressure and computer literacy. The case study describes a group writing exercise that requires students to create either a new critical edition or performance text based on a passage from a Shakespeare play. As well as collaborating on the production of the text and a commentary on the joint decisions that lead to the creation of a definitive version, students are asked to reflect on their experience of group working by completing a pre-defined group assessment sheet and collaboratively assigning a percentage share of marks to their peers based on contribution to the task.

The criteria by which students peer assess and reflect on their experiences are pre-defined by tutors. Students are asked about the process and experience of working together, rather than the process of developing the content of their work. The criteria employed embody assumptions about effective team working that appear authentic in that they replicate aspects of team behaviour in the workplace (for example, the emphasis on efficiency implied in the statement “the group avoided duplication of tasks”). However, another case study in this conference (Baxter: A Case Study of Online Collaborative Work in a Large First Year Psychology Class) describes an effective group working design in which all students are required to undertake all the tasks in order to be able to provide useful feedback to each other. It’s clear that criteria are unlikely to be neutral in this respect, and it would be interesting to speculate on whether class debate about what is important and authentic about this kind of task and its relationship to employability skills might lead to the development of a different set of indicators (and perhaps different criteria for each annual cohort or indeed each group within the cohort). What is clearly authentic about this task is its relationship both to the demands of the subject discipline and the way in which students are encouraged to work together through a complex task to achieve an acceptable negotiated compromise around a text.


Questions for Nandini and Stuart:

1. What incentive is there for students to be honest when they use the criteria sheet to assess the group experience? Might they be tempted to report that their group was wholly functional?

2. Do you think that the pre-defined criteria embody assumptions that might not reflect students’ subsequent experiences? Would asking students to define their own criteria add value to the task?

3. How do you deal with dissent when group members are asked to calibrate individual contributions to the task? How do group members assess the quality of contributions?

4. Do you think the earlier introduction of group working (perhaps in year one) would have a positive impact on the student experience? Are other tutors in the department planning similar initiatives?

The employability agenda is also a driver for the design of Quintin Cutts’ computing science module. Here too, students are required to collaborate in the production of a text (in this case, a formal essay) but the collaboration is not managed in groups. Individuals write a draft essay which is then reviewed by three other students in the class. The reviews themselves are subjected to scrutiny and assessment by other students and the original essay draft is revised in line with peer comments where these are deemed valid by the author. Students are asked to make a response to reviewers and these responses are also marked by student peers. The final revised essay is given a summative mark by tutors.

The employability rationale here is primarily one of responsibility. Students have to participate in all elements of the process of review, reflection and revision in order to ensure the process is effective for all class members. They in turn receive substantially more feedback on their work and on their contribution to the process than would be possible if feedback were solely tutor-generated. It could be argued that the responsibility imperative is undermined because the tutor is the final arbiter of the summative mark for each essay.

In common with the Das and McGugan case study, the criteria used by students are pre-defined by the tutor, although it is implied that a significant amount of time is spent in class discussing what is important and clarifying expectations. Comments from student evaluations suggest, however, that the feedback received by students from their peers is not always of a high enough quality to be useful. Perhaps students might benefit from a more active engagement with criteria development (for example, the case described by Rosario Hernandez in her paper for this conference).

A more troublesome aspect of this case study is the perceived lack of authenticity from some students who cannot see the link between writing a formal essay and the kinds of tasks that they might be asked to undertake in the workplace. Cutts’ solution of asking former students to return to explain their real-life work experiences and the relevance of the class is an attractive one, but perhaps a re-design of the task which asks students to collaborate on a technical report or other form of writing might also be helpful. An earlier introduction of similar exercises into the course (perhaps even in first year) might also help to embed these kinds of working practices.


Questions for Quintin:

1. Why do you think it’s important that the tutor awards the final summative mark for the essay? Does this undermine student responsibility?

2. Do you think that students would provide better feedback to each other if they were asked to define their own criteria?

3. The process you describe seems very complex. What are the key elements you would keep if you were asked to simplify it? Do you think that the iterative nature of the task is helpful?

4. What would you change to make the task appear more immediately authentic to final year students concerned about employability?
James Derounian
Posts: 6

25/05/2007 16:23  
Catherine/colleagues - a short case study to throw in to the mix:

I have anywhere between 50 and 100 first year undergraduates taking a module 'Action with Communities'; their first assignment – an online collaborative writing task – asks them, in groups of 3-4, to "Discuss & explain the principles of community development". The latter cover things like inclusion/exclusion, negotiation, networking, integration etc.

Basically the student group practises the very principles they are writing about....so they are experiencing the joys and pitfalls of community development through their collaborative writing.

The students may never (physically) meet in the production of their collaborative essay! My conclusion, over 10 years – 'the process is the product' in the sense that the quality of the essay invariably reflects the quality of the group interactions – how badly/well they got on with each other!
Catherine Owen
Posts: 27

28/05/2007 17:25  
Thanks James - really interesting stuff.

Another case study (forthcoming) from the REAP project suggests that students in group projects having difficulties with group dynamics (or wishing to swap to be with friends) often regret their decision to move because dysfunctional groups can produce good work. There's obviously a lot of complexity in this mix!
Quintin Cutts
Posts: 8

29/05/2007 15:32  
Answer to Catherine's Q1 (Very thought provoking question!)
"Why do you think it’s important that the tutor awards the final summative mark for the essay? Does this undermine student responsibility?"

Immediate response is: the students wouldn’t like it – they feel nervous enough doing the much smaller assessment tasks – and this is one of the first times they’ve been asked to do anything like this.
But a little more thought on the nature of the formative reviews completed by students indicates how far apart they can be, on the same essay. Some of the students would know this, having seen the three reviews of their own essay draft – undermining confidence. At the moment I don’t have a calibration process – and an automatic averaging of say three marks doesn’t sit well with me either. One of the de-stressors of the current system is that the students can take or leave the reviews they get – they still have ownership of their work. Yes, this is handed over, in a sense, when they pass the final version to me – but then when staff mark it, they take care to calibrate their marking process.

Perhaps I could move to final marking by students if I were to radically adjust (a) how the criteria were set, with fuller student involvement/engagement, (b) the amount of practice the students had in applying the criteria beforehand, and (c) the level of feedback on (b), thereby hopefully resulting in a successful calibration of their marking.
Quintin Cutts
Posts: 8

29/05/2007 15:43  
Answer to Catherine's Q2:
2. Do you think that students would provide better feedback to each other if they were asked to define their own criteria?

Yes, definitely – or rather, if we were to set the criteria collaboratively. As an example, currently students are keen not to be assessed on spelling/grammar etc. Now, this is something I’m not keen to trade away – but I see where you’re heading: unless I can persuade the students that spelling/grammar is important in their future lives, why will they take it seriously – either as authors or as reviewers?

I think the equally important aspect is that they understand the criteria more deeply than they do at present (hangs head…). This will require more work/practice/discussion before starting the whole reviewing process. Being able to see in writing how students evaluate an essay, and then seeing how they respond to feedback on their own, is incredibly insightful into their understanding of the whole process – and (hangs head again…) how different it is to the understanding I want them to have. Clearly, a number of students still do not understand the nature of a strong argument.
Mary Welsh
Posts: 12

29/05/2007 15:58  
Oh, how I agree with that final comment from Quintin! I do believe we must try to do more to develop in our students an understanding of the way in which an argument is constructed. It is a process which can be learned – OK, results may not be very original at first, but I see nothing wrong in allowing "apprentice" writers to use a sort of adult "writing frame" with which to practise before moving on to more original structures. Of course, different disciplines may require different structures for an argument to be effective, but we all have to start somewhere. Maybe we need to work out why a text like "Freakonomics" is so valued by so many and apply that analysis to ensuring that our criteria are as well constructed as some of the arguments in that text. Criteria need to be explicit and written in language students understand.
Quintin Cutts
Posts: 8

29/05/2007 16:30  
Answer to Catherine's Q3:
The process you describe seems very complex. What are the key elements you would keep if you were asked to simplify it? Do you think that the iterative nature of the task is helpful?

The aim is to give students multiple opportunities to evaluate the quality of an argument, and since these are iterative in nature (as outlined below), I hope that they develop their critical evaluation skills as they go (so yes, iterative is helpful):

1. during the development of their own draft
2. while reviewing the three essays by other students
3. while marking three reviews (they must assess the quality of the three reviews against the original essays)
4. analyzing reviews of their essay, to decide which to accept/which to reject
5. formalizing this as a response to reviewers
6. assessing the r-to-r – analyzing arguments in original essay, reviews and r-to-r

In principle, this much practice would seem to be a good thing. In practice however, I think that many students simply don’t understand the nature of a sound argument well enough, and hence the whole process is flawed. This is my major realization this year, now that I've had more time to look at the results – that much more work needs to go into this aspect in the coming session.

The process could be simplified by removing the response to reviewers part – which is in any case only done by the Masters segment of the class. Something would be lost here, though, in the whole design, as the formal acceptance/analysis of the reviews received is a fundamental part of the process. The students are being asked to go beyond the blind acceptance of feedback (as is usual when it comes from “the teacher”) – and to actively evaluate their peers’ feedback. Of course, the automatic tendency is not to trust their peers’ comments, particularly when it is negative, since they don’t rate themselves (and hence their peers) as authorities.

Finally. The process seems complex, but the students don’t get lost in it – and each item is not too long. The course runs over two terms – and they really can’t learn this skill (of developing/analyzing arguments) by reading alone – they must do it.

So, thank you so much for these questions, Catherine – they are sharpening my understanding a lot and giving me insight into what to do differently next time… principally, to explain the whole process much more clearly to the students…
Quintin Cutts
Posts: 8

29/05/2007 16:32  
Finally, Catherine's 4th question
What would you change to make the task appear more immediately authentic to final year students concerned about employability?

I will be sure to get external speakers early in the course from industry who can comment on the value of this kind of task. And I can attempt to use more relevant scenarios – although many of them are relevant to most students…
Catherine Owen
Posts: 27

29/05/2007 16:42  
Hi Quintin.

Thanks so much for these lengthy and thoughtful responses to my sticky questions! I was fascinated to read about this class. One of the things that really struck me was that, in their evaluations, students seemed to think that the class was easy. I thought it sounded incredibly challenging! Do you think this was because computing science students found essay writing and reviewing easier than writing complex code (which sounds reasonable to me!) or was it because they underestimated the complexity and depth of the class?

Catherine
Quintin Cutts
Posts: 8

29/05/2007 17:01  
Oh, how I wish it was the former, and that the skills required for writing complex code formed a superset containing those required for essay writing/reviewing. But no, I am confident that many students underestimated the complexity/depth of the class :-(

I think I'm only beginning to realise how challenging the course is myself!
Tracey Leacock
Posts: 3

29/05/2007 19:11  
Hi Nandini, Stuart, & Catherine, I’d like to comment on some of the questions Catherine has raised about this case study. I have a fair bit of anecdotal experience in using team work of various sorts in my teaching, but (much to my frustration), I haven’t yet had a chance to systematize these experiences into research. For this reason, I am very interested & excited to see that such work is happening!

As some of my comments may get a bit long, I’ll make separate posts for each question. Here’s the first.

Catherine asked whether students would be tempted to report that their teams were more functional than was actually the case. In my experience, this does not happen. Rather, teams are glad of the opportunity to document / report problems that they experienced. I have wondered whether this is a means of dealing with the frustration that is bound to occur (even in a high-functioning team). By the end of term, my teams are usually either happy with their group work or looking forward to a future chance to try a different approach that may work better, but either way, they seem to be willing to share both strengths and weaknesses.

I’d be very interested to know if Nandini & Stuart or others have looked into when & why teams are willing vs. not willing to share this information?

Thanks,
Tracey (Canada)
Tracey Leacock
Posts: 3

29/05/2007 19:13  
Hi all, relating to Catherine’s Question 3 - I see that Nandini & Stuart ask for teams to provide the % contributions of each team member, which is also an approach I take. (I also ask teams to supply an explanation of the contributions & how they resulted in the reported percentages, in addition to the number.) I have had one instance when this percentage allocation backfired. One team member decided to give up on the course (but did not drop it). He was quite happy to meet with his team just long enough to sign the form showing that he had done zero % of the work. Given the mark allocation scheme I had at the time, this turned out to be a problem.

So, a question for Nandini & Stuart: if a team project is evaluated as being worth (for example) 80 marks out of a possible 100, and the team members say they contributed 20%, 25%, 25%, and 30%, how do you allocate the grades? Does the 30% student receive a grade higher than 80/100? Do those at 25% receive a grade lower than 80/100?

I believe this is an important question because it’s often not realistic or fair to demand that all students contribute equally – this is a key reason so many strong students are nervous about group work. If a student who just wants to pass (say s/he is taking a heavy course load, working part time, & looking after kids at home, as my students often are) is teamed with a student who wants a top grade, then they will need to work out an approach that fits for both of them. This will likely mean that the top student will put in more work. So long as the team as a whole agree to an approach that works for them & uses effective team management to meet their goals, it *shouldn’t* be a problem that their contributions are unequal. It becomes a problem if all get the same grade, or if those who contributed 25% get punished because they didn’t contribute 30%, etc.

So, I’d like to know more about the approach(es) you take to actually allocating the grades, what problems you may have encountered, and also whether the students know the details of the process at the outset?

Thanks,
Tracey
Tracey Leacock
Posts: 3

29/05/2007 19:16  
A note on Catherine’s question 4 to Nandini & Stuart: Yes! The earlier effective group work can be *integrated* (key word!) into the curriculum, the better! I’d love to see students coming into university with strong team management skills. However, their first year of university is a golden opportunity to introduce such concepts. Students are in a new environment, and they haven’t yet learned the norms for that environment. I used to teach in a program that placed a strong emphasis on introducing team management skills via, not just group work, but reflection on group work, tools to support group work (e.g. team contracts), etc. Some aspects of the program sound similar to the great approach James Derounian describes in his posting.

Then, throughout their 4 year program, students continued to work in groups in many classes. By 4th year, they were very sophisticated in this respect & knew how to set up clear team expectations, ensure clear communication, address conflicts … and get their projects done! It was a shock to then teach our new graduate students – as grad students, they had a higher level of content knowledge, but they simply hadn’t had the exposure to effective group work.

Group work requires skills beyond those used to do solo work. So long as it’s “tacked on” in one or two courses, it really is an uphill battle to find ways to make it both fair and meaningful to students. I’m not suggesting all coursework should be team-based, but if we want to prepare students for the workplace, then teamwork skills should be just as much a part of what we teach students as solo work skills.

Tracey (Canada)
Stuart McGugan
Posts: 5

30/05/2007 13:52  
Catherine

Thanks for your questions. I've not had a chance to catch up with Nandini but here are my thoughts

Q1

What incentive is there for the students to be honest?
This assumes students need an incentive to be honest. Perhaps they do, but the aspiration was to promote greater learner responsibility. I'm sure they are tempted to 'claim' the group was wholly functional, but perhaps there was some self-reflection along the way.

Q2
Good point. The pre-determined criteria may be more 'meaningful' to some students than to others. I wonder if it is possible to develop criteria that capture all the subtleties of the experience.
Interested in the idea of students defining their own criteria and am trying to think about how to put this into practice. Any suggestions?

Q3 In the first year of running the exercise, dissent wasn't an issue, but there has been a bit more dissatisfaction this year (Nandini can correct me if I'm wrong).
The weighting is 30% group, with two individual pieces at 30% (essay) and 40% (exam). A key driver was the formative function of the exercise. Doing it in a group format allowed feedback to be given back to students quickly, which they could then use in the individual essay – closing the feedback loop, so to speak – and the quality of the work does seem to have improved. An exam question is also linked to the exercise.

Q4 Yes I agree entirely. The sooner students experience collaborative work the better. I believe similar initiatives are underway in the Department

Thanks

Stuart
Stuart McGugan
Posts: 5

30/05/2007 14:16  
Hi Tracey

I think the point you make about giving students the opportunity ... is a good one. That's probably the way we see it here. As educators we can provide students with learning opportunities, and I think we have to trust them to use these opportunities wisely – in my experience most do

The assessment is only in its second year, so building up a picture of behaviour is tricky. Group dynamics are a bit different this year than in previous years

Thanks
Stuart
Stuart McGugan
Posts: 5

30/05/2007 14:26  
Hi Tracey,

Just received email from Nandini

To clarify - if all students contribute equally, they each get the marks awarded to the work. In cases of unequal distribution, marks awarded are weighted accordingly. Students know about this from the module information right from the beginning of the course.

In problem cases we have group and one-to-one meetings to sort things out if absolutely necessary. One person quit in a group – in a group meeting it was agreed his contribution would be put down as 0%.
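To make Tracey's allocation question concrete, here is a minimal sketch of one plausible weighting rule – the proportional scaling, the cap at the maximum mark, and the function name are all my assumptions, as the exact formula isn't spelled out:

```python
def allocate_marks(group_mark, contributions, cap=100):
    """Scale a shared group mark by each member's reported
    percentage contribution, relative to an equal share."""
    equal_share = 100 / len(contributions)  # e.g. 25% in a group of four
    return {
        name: min(round(group_mark * pct / equal_share, 1), cap)
        for name, pct in contributions.items()
    }

# Tracey's example: a group mark of 80/100, contributions of 20/25/25/30
print(allocate_marks(80, {"A": 20, "B": 25, "C": 25, "D": 30}))
```

Under this rule the 30% contributor does score above the group mark (96/100) and the 20% contributor below it (64/100), which is exactly the behaviour Tracey asks about.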

Hope this makes sense
Stuart

Stuart McGugan
Posts: 5

30/05/2007 14:43  
If anyone is interested, there is another case about the learning that dysfunctional online groups can generate.

See
McConnell, D. (2005) 'Examining the dynamics of networked e-learning groups and communities', Studies in Higher Education, 30(1), 25-42.

Fran Everingham
Posts: 14

30/05/2007 16:22  
Students swapping to friendship groups is not a problem in truly distance online group work – it's rare for students to know each other. The bigger issue is finding effective ways of establishing groups that support the group functioning. I've tried various permutations: alphabetical list; geographical proximity, on the basis that students might feel a greater sense of closeness if they share a location such as the same state (even though they may still be 200-300 km away from each other). Inevitably that leaves the outriders clumped together – their common bond being located on the east and west coasts of Australia, and say Singapore. I have tried mixing by profession (same or different), mixing by culture (same or different), gender balance and so forth. Now, I use a preliminary exercise of some sort to engage the large group in an online discussion relevant to the unit of study. Through this a list of topics is identified and individuals can sign up to a topic of interest. When the group number hits five, a second group can be established if there is enough interest. So in this way, the establishment of groups is sorted out by the students, and the basis of their assignment work – a shared interest – is established. In my observation, time allocated to the large-group discussion phase is well spent. In turn, it appears to significantly reduce the time spent by students trying to 'form' the small group and negotiate tasks to be tackled.
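The sign-up mechanism described here can be sketched in a few lines – a hypothetical illustration, not the actual system; the five-member threshold comes from the post, everything else (names, data structure) is assumed:

```python
def sign_up(groups, topic, student, max_size=5):
    """Add a student to the open group for a topic; once a group
    reaches max_size, the next sign-up opens a further group."""
    topic_groups = groups.setdefault(topic, [[]])
    if len(topic_groups[-1]) >= max_size:
        topic_groups.append([])  # open a new group for this topic
    topic_groups[-1].append(student)

groups = {}
for s in ["Ana", "Ben", "Cai", "Di", "Ed", "Fay"]:
    sign_up(groups, "negotiation", s)
print(groups)  # the sixth sign-up starts a second group
```

The design point is that group membership falls out of topic interest, so 'forming' time is spent on the shared subject rather than on negotiating from scratch.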
James Derounian
Posts: 6

30/05/2007 17:01  
I agree with Fran regarding the importance of generating a 'community of interest' as a basis for productive group working. It is, after all, what we do in life – citizens and work colleagues collaborating in pursuit of a common goal. On the other hand....(!)

It can be stimulating, valuable, challenging and mind-expanding to work with others who bring very different skills and interests to the table – particularly (perhaps) in relation to interdisciplinary working, say on issues around sustainability (where you might team up students with economic, social, cultural and environmental subject knowledge).

James
(University of Gloucestershire UK)
Stuart McGugan
Posts: 5

31/05/2007 08:38  
Hello Quintin

You make a good point about student engagement with criteria, should you wish to move to final marking by students.
I also think certain criteria might be more appropriate for students to work with (at least initially) and lead to better reliability. I've used criteria based on oral presentation, which most find relatively easy to work with. Judging 'higher level' attributes of written work is, I think, much trickier, with the potential for greater variability

Mind you the same probably applies for a lot of staff

Stuart