Subject: Comments on papers; questions for authors and participants

Stephen Ehrmann
Posts: 11

28/05/2007 18:45  

Review of Gray and McKitrick REAP papers and Questions for Discussion

Stephen C. Ehrmann, Director, The Flashlight Program, The TLT Group (http://www.tltgroup.org/flashlightP.htm)

Both papers argue that involvement of the instructional staff is key to institutionalization of assessment.

At the Naval Academy, a process of institution-wide goal setting (1999) was followed by an effort to encourage instructors to incorporate enhanced assessment in their courses and departments. The effort began with training for interested volunteers who were also seen as opinion leaders in their disciplines. Tactics included travel by volunteers to conferences and other institutions, local workshops, colloquia, poster sessions, and development of assessment plans for programs and departments. An Office of Academic Assessment was created, and its director worked with the Director of Teaching and Learning. This process resulted in changes to assessment plans when departments found their initial plans unworkable, and it also led, “in a number of cases, to course or curricular changes.” The process described by Gray is quite systematic, but the paper was too brief to show whether it has succeeded in creating a cycle of success and dissemination among faculty, or in fostering improved student learning.

The McKitrick paper provides more detail about both the goals being assessed and the methods in use. In contrast to the Naval Academy’s focus on departments, SUNY Binghamton worked on general education and, in this paper, specifically on critical thinking in general education courses. Program leaders engaged interested instructors in discussions of student progress toward institutional goals. They did so through a Delphi survey (two surveys, with the second used to build consensus by feeding back results from the first) and through meetings. Evidence was gathered from instructors, the library, internship supervisors, and other sources. The discussion has led to plans for assessment of ‘learning in upper-division critical thinking and information management courses.’ The paper gives no examples of specific changes in how critical thinking is being assessed. McKitrick does mention, however, that assessment of information literacy has led to insight into some specific problems and to changes in teaching.

Questions for the authors

  1. Peter, I couldn’t tell whether the effort in the first half of the decade at the Naval Academy was mainly to assess courses and programs on their own terms, or whether it was focused on assessment of progress toward the academic goals defined in 1999. Could you clarify that point? Does the effort seem to have resulted in any improvement in performance on these Academy goals so far?
  2. Sean, you wrote about the initial bumps and starts, “Initial meetings by assessment staff with faculty and staff members evidenced some confusion about what role assessment would play in the tenure and promotion process, evaluations of teaching, and control over resources at department and program levels.”  Can you tell us more about this, and whether/how things were clarified?
  3. Sean, your paper describes an effort to engage a large fraction of the instructors of general education courses in thinking together about critical thinking and its assessment. The paper mentions that 90% of faculty respondents to a survey liked the engagement effort (what fraction of the faculty responded to the survey?). What do you see as the biggest success of this effort to engage faculty? The biggest frustration or disappointment? Based on your experience, what would you advise other institutions to do differently than Binghamton did?
  4. For both authors, suppose that, after 2007, all external pressures for assessment were removed. Would assessment continue to grow and develop at your institution? Have there been enough individual and programmatic success stories that the move toward a culture of goals and evidence would continue?

The remaining questions are for all participants in this discussion:

  1. How engaged is your instructional staff in the improvement of assessment?  For example, what fraction of your instructors have attended more than one assessment training session or event over the years? What strategies seem most promising for engaging a larger fraction of your academic staff in improving assessment?
  2. What tools or techniques for assessing learning have been MOST easily and widely shared among instructors and departments? (I’m talking about tools for assessing the outcomes and the process of learning.) Rubric designs? Survey or feedback forms? Particular types of performance tests, such as the Force Concept Inventory in physics?
  3. How adequate is assessment data, by itself, for showing where and how to improve learning? For example, I’m guessing that disappointing data about student writing skill at graduation would not, by itself, provide much guidance about how to improve student writing. From assessment and other sources, are your instructors getting the evidence they need in order to see HOW to improve learning for their students? (For some of my own thoughts on this issue, see the second reference below.)

 

Related Resources from the Facilitator

“Creating a Culture of Evidence – Case Studies” http://www.tltgroup.org/Flashlight/Handbook/Instns_Data.htm

“What Outcomes Assessment Misses” (why outcomes data often provide inadequate guidance on how to improve outcomes, and what to do about it)

http://www.tltgroup.org/programs/outcomes.html

Alison Muirhead
Posts: 14

28/05/2007 22:15  
Hi Steve,

Thanks for your review, and a set of really interesting questions to kick things off!

Can I ask delegates to start a new topic around each question they want to discuss? Steve has helpfully started us all off with an 'initial question' topic.

Thanks
Alison.
Terri Rees
Posts: 2

29/05/2007 15:28  
The biggest difficulty we seem to face is workload-related, and so staff develop pragmatic responses to assessment, which may not be the most beneficial to students.
Stephen Ehrmann
Posts: 11

29/05/2007 15:49  
[quote]The biggest difficulty we seem to face is workload-related, and so staff develop pragmatic responses to assessment, which may not be the most beneficial to students.[/quote]
I agree. However, at the institutions I've known, workload is a slippery concept. I've only seen one study of how much time instructors spend teaching a course, and it was a small pilot. But findings hinted that the time/course was more a matter of individual style or preference than it was of some global reality that it takes X hours per term to teach one student, or one course.
Derek Rowntree
Posts: 35

30/05/2007 23:52  
You say, Stephen, that: "...workload is a slippery concept. I've only seen one study of how much time instructors spend teaching a course, and it was a small pilot. But findings hinted that the time/course was more a matter of individual style or preference than it was of some global reality that it takes X hours per term to teach one student, or one course."

Of course, some teachers give more time to students than others (and always have done, even when they were scheduled for the same number of hours of f2f contact every day). But are there any course evaluations that indicate whether more time spent on students makes them more satisfied and productive learners? It may turn out that teachers whose "style or preference" is to invest less time than colleagues have not properly adjusted to students' new needs.

Stephen Ehrmann
Posts: 11

31/05/2007 12:34  
Derek, you asked about studies that seek to relate instructors' investment of time in teaching to student learning or satisfaction.

My knowledge of the literature is fragmentary, but I do recall one study which I summarized several years ago on: http://www.tltgroup.org/resources/Research_Ideas/Faculty_Competences.htm

The key paragraph for your purposes is the one in brackets. The instructors who were widely regarded by peers, administrators, and students as absolutely first-rate teachers -- these 'star' instructors were quite experimental in their approaches, trying one thing after another until their students got excited about the material and learned it. In effect, they were using assessment continually, for the whole class and for individuals in the class, to adjust what they were doing. In contrast, the control group (equally widely known but rarely rated as exemplary) believed that not all their students were capable of learning; it was the instructors' job to 'put it out there' and the students' job to get it.

To me, the control group's approach sounds less time-consuming than the star group's. But no measures were made of how much time was being spent.

I would not expect to find any systematic relationship between time spent and learning, any more than I would expect to find a relationship between pounds of paper used per student, or watts of electricity per student, and student learning. At best, one could begin with two contrasting patterns of teaching/learning activity (e.g., the study I described above) and ask whether, on average, one was more time-consuming than the other. It is also important, however, to ask about the experience of the time. Which ways of investing time in teaching are more fulfilling for the instructors? Which are more stressful? If I were studying instructor use of time, I'd be looking for ways to make the activities more engaging and fulfilling for top-flight professionals. I wouldn't assume that the less time spent (or the more time spent) the better. Cost is only one criterion of several that ought to be applied to the reorganization of academic work.