REAP Conference Fora
Subject: How can we do it effectively?

David Boud
Posts: 5

29/05/2007 05:14  
I appreciate you viewing my keynote presentation, and I'd be pleased to discuss any issues that have arisen for you from it. My emphasis has been on viewing assessment primarily through the influence it has on student learning and on the formation of learners' capacity to make judgements about their own work. While not dismissing the need for assessment to contribute to certification, I believe that this purpose has dominated and distorted assessment for too long, and that we now need to put effort into creating assessment designs that seriously engage with learning and support the intentions we have for learners. My preference for the discussion is to consider how we might do this effectively rather than whether it is worth doing.
Victor Hendricken
Posts: 1

30/05/2007 12:05  
I am not sure where this question fits in, or if it does at all, so I'll start here. Is test item analysis still used as a tool for theoretically increasing test effectiveness? I am not strong in statistical analysis procedures, so I wonder whether my own bias prevents me from seeing merit in test item analysis. How can statistical test item analysis contribute to students' learning, I wonder?

I am meeting with a group of college instructors in a few weeks who are looking for a "foolproof" method of analysing (mostly multiple choice) tests. I want to try to show them the value of including students in the assessment process prior to delivering tests and exams.

Thanks for listening and pointing me in the right direction.
Rebecca Sisk
Posts: 5

30/05/2007 14:01  
I have used point-biserials, but you are left with the question of how you use the information if you find out you have a "questionable question." Try it again? Throw it out? Give the students the points? Revise? I think the standard is to revise and test again, but I haven't seen a lot of literature on the issue.
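
For concreteness, here is a minimal sketch in Python of the two classical item statistics in play here (item difficulty and the corrected point-biserial discrimination), assuming dichotomously scored 0/1 responses. The toy data and the flag thresholds in the comments are common rules of thumb for illustration, not anything prescribed in this thread:

    import numpy as np

    def item_analysis(scores):
        """Per-item difficulty (proportion correct) and corrected
        point-biserial discrimination (item vs. rest-of-test score)."""
        scores = np.asarray(scores, dtype=float)   # rows = students, cols = items
        difficulty = scores.mean(axis=0)
        totals = scores.sum(axis=1)
        discrimination = np.empty(scores.shape[1])
        for i in range(scores.shape[1]):
            rest = totals - scores[:, i]           # exclude the item itself
            discrimination[i] = np.corrcoef(scores[:, i], rest)[0, 1]
        return difficulty, discrimination

    # Toy data: 6 students x 4 items, scored 1 = correct, 0 = wrong
    responses = [[1, 1, 0, 1],
                 [1, 0, 0, 1],
                 [0, 1, 1, 0],
                 [1, 1, 1, 1],
                 [0, 0, 0, 1],
                 [1, 1, 0, 0]]
    p, r_pb = item_analysis(responses)
    # Common rules of thumb: look twice at items with difficulty outside
    # roughly 0.2-0.9, or discrimination below about 0.2 (a negative value
    # is the clearest sign of a "questionable question").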

As to contributing to students' learning, I can see that it might, in a discussion of the nuances of the responses to a questionable item, but there are probably better ways to assess formatively than examining the merits of an exam. This gets to your point: test reliability matters more at the summative end, where a fair test is important when used for grading.

If you find a foolproof method of analysis, please share with the world!
Margot McNeill
Posts: 2

31/05/2007 00:46  
Hi Victor,
Perhaps this would provide a good opportunity to influence the team you're working with about the importance of constructing items with a focus on higher order learning. David Nicol's new paper could help:
Nicol, D. (2007) 'E-assessment by design: using multiple-choice tests to good effect', Journal of Further and Higher Education (sorry it's not linked).
Also, Anderson and Krathwohl's revised Bloom's taxonomy can be useful in plotting just which types of learning outcomes and processes the items are aiming for:

Anderson, L.W. and Krathwohl, D.R. (eds.) (2001) A Taxonomy for Learning, Teaching, and Assessing: A Revision of Bloom's Taxonomy of Educational Objectives. New York: Longman.

Cheers,
Margot

Rosalind Duhs
Posts: 3

31/05/2007 09:22  
Could I ask a question about the relationship between higher order learning and grading? The premise is that higher order learning is what we want, and that the focus on a desire to grade reliably can run counter to achieving it, as mentioned by David. (Thank you, the keynote was so interesting and useful!) Sweden has long had a distinction-pass-fail (three-grade) grading system for most courses. This has been regarded as a way of avoiding quantitative approaches to knowledge 'accumulation' and testing. The fear now is that with the introduction (in some universities) of a seven-grade scale prompted by the Bologna Process, higher order learning will suffer, as teachers unaccustomed to assessing student performance in this way will turn to objective testing. Does anyone have any comments on this? Can we (and our students!) write grading criteria which might promote higher order learning, while at the same time helping assessors to feel confident that they're assessing reliably?
Rosalind
David Boud
Posts: 5

31/05/2007 11:05  
I would be delighted to have the Swedish three-grade system, as that is all that is needed for most summative purposes. We have five grades in Australia, which I think is too many. Seven is excessive. In systems with more grades, attention shifts away from what is and is not needed towards fine (often invisible) distinctions between grades that have no meaningful counterpart in the world outside grading. The focus moves from qualitative differences of judgement (which relate to substantive issues of learning) to minor differences in bits of knowledge (which may not be related to matters of substance at all). The seven-grade system (is it really intrinsic to the agreement?) is the greatest negative in an otherwise quite useful coordination.
Mantz Yorke
Posts: 10

31/05/2007 11:06  
I have been scratching around in the literature on grading for a little while, and am increasingly concerned about the lack of robustness of overall assessments (GPAs, degree classifications, etc.). The same applies at less aggregated levels. Complex learning, as my late friend Peter Knight would have wanted to argue here, doesn't lend itself to metrication, but it can be judged (he and others address the theme of judgement in Boud & Falchikov, 1997). Yet many interested parties crave the simplicity of numbers. I think there's a considerable need to educate the interested parties as to what is defensible in terms of grading, and what may be better conveyed by other means. There is some confusion between norm- and criterion-referencing, too, when the European Credit Transfer and Accumulation System uses norm-referenced bands while universities don't grade according to the curve.
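
To make the norm- versus criterion-referencing distinction concrete, a minimal sketch: the percentile shares below follow the original ECTS grading scale (A top 10%, B next 25%, C next 30%, D next 25%, E bottom 10% of passing students), while the cohort's marks and the pass thresholds are invented purely for illustration:

    import numpy as np

    marks = np.array([38, 45, 52, 58, 61, 64, 67, 70, 74, 82])  # invented % marks

    def ects_norm_referenced(marks):
        """Original ECTS bands assigned by rank order within the cohort."""
        order = np.argsort(-marks)                 # best student first
        shares = [.10, .25, .30, .25, .10]         # A, B, C, D, E
        cuts = np.round(np.cumsum(shares) * len(marks)).astype(int)
        grades = np.empty(len(marks), dtype="<U1")
        start = 0
        for cut, label in zip(cuts, "ABCDE"):
            grades[order[start:cut]] = label
            start = cut
        return grades

    def criterion_referenced(marks):
        """Grade follows from the mark alone (thresholds invented)."""
        bands = [(70, "A"), (60, "B"), (50, "C"), (45, "D"), (40, "E")]
        return np.array([next((g for t, g in bands if m >= t), "F")
                         for m in marks])

    # The same student can receive different grades under the two schemes:
    # norm-referencing fixes the distribution of grades, while
    # criterion-referencing fixes the standard each grade represents.
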
Mantz Yorke
Posts: 10

31/05/2007 11:29  
Sorry - in a timewarp there - Boud & Falchikov's book is 2007. This isn't the first time I've slipped back a decade in citing: why do I do that?