REAP Conference Fora (in programme order)
Subject: Research on student learning outcomes assessment at the program level?

Eric Soulsby
Posts: 2

29/05/2007 12:13  
I'm not sure if this question is appropriate for this session, but getting instructors involved in assessment-related activity is something we are trying to do, and it has been problematic without providing some incentive in some manner. With a lack of funds to subsidize assessment efforts within individual departments, we are trying to persuade faculty that assessment "works" in terms of helping them improve their instructional delivery and in terms of improved student learning. Now, with that said, a question that arises is the following: Have there been any studies done which show that student learning outcomes assessment at the program level (rather than the unit, class, or course level) leads to improvement? I am aware of the recent study by Pascarella and Terenzini commissioned by ABET, Inc. for Engineering education programs, which shows that assessment, as part of the EC2000 criteria, does make a difference. Does anyone know of other studies? I find mostly case study anecdotes, which are tough to use to persuade instructional faculty of the merits of program-level assessment. Thanks.
Stephen Ehrmann
Posts: 11

29/05/2007 14:07  
"Assessment" is a term with several correct definitions and, if we don't start by clarifying which definition is in use, we can quickly get into trouble (In other words, "assessment" is a confusor

I infer that, by "assessment," you mean some kind of estimate of the outcomes of student learning. An applause meter is one way to estimate the outcome of a play. Does the actor's ability to hear applause lead to better theatre? To some degree, I would guess. In fact, it's hard to imagine that knowing that the audience likes the work (or is booing) would be useless information. On the other hand, most actors would like to have different kinds of information as well. I believe the same thing is true for learning. The assessment of outcomes does not, by itself, provide much guidance for [b]how[/b] to improve learning. So the relationship between assessment and subsequent improvement of outcomes has a lot to do with the creativity of the instructor, the availability of resources to support those improvements, etc.

What do other folks think about this question?
cynthia shedd
Posts: 5

29/05/2007 15:10  
The ABET study is interesting. I appreciate the challenges you have with your faculty. Assessment, particularly program review, can seem very far removed from their work in the classroom. However, if program review is considered organically, it is very closely related to the classroom.

If program review is externally imposed (e.g. "we have to do this to get funding"), departments will comply, and we all know how useful that is. A better approach is to build commitment to the process.

For the last three years I have been working with one department in doing an internally-driven program review. We began by re-drafting their mission statement and identifying their core values (Collins & Porras, 2002). These provided a framework for the program. We then mapped the nine core values (which we call learning themes) to the curriculum and corrected any gaps. We developed four outcome statements and are beginning the process of formally assessing these.

In the process of doing all this, the faculty became very energized. They are very excited about what they are doing and the impact it has had on their students. For instance, some of their students bring a copy of the learning themes brochure to job interviews and use it as a quick and graphic way to explain the skills and knowledge they have.

They have made a number of changes to individual courses so that they can better deliver the learning themes of the program. This has led them to partner with other departments (Office of Student Life and the Academic and Career Advising Center) to strengthen their delivery of two of their themes. Bottom line is that they see program review as continuous--it's just how they do business.
Alison Muirhead
Posts: 14

29/05/2007 15:33  
Hi Cindy,

This sounds really interesting - can you expand on how you're planning to assess the four outcome statements? I assume the assessment of these will then feed iteratively into the program review?

Alison.
Peter Gray
Posts: 1

29/05/2007 15:47  
At the Naval Academy there are regular "full-scale" program reviews every five years with external reviewers. These are helpful in ensuring that our programs meet accepted discipline standards. However, the real assessment work (from clarifying goals and objectives, to gathering evidence of student learning, to making appropriate course and curriculum changes) gets done on an annual basis. It's this ongoing process of assessment that engages faculty because it concerns the courses they teach and addresses the questions they have about student learning.
cynthia shedd
Posts: 5

29/05/2007 15:51  
Hi Alison,

We are starting next week (in a post-quarter retreat) to look at the ethics outcome which relates directly to both the ethics and professionalism themes.

All students in the college are required to take a moral reasoning seminar. The department decided to build on that and worked with the Office of Student Life, which is responsible for the moral reasoning seminars, to learn what materials and concepts are presented to students. They now explicitly include those concepts and reference the seminar when talking about ethics in the major.

Assessment will be based on a paper majors write this quarter. The assignment is for students to write about a moral dilemma they have had (or witnessed). They have to describe the dilemma and what they did and then analyze it, describing stakeholders, various principles/approaches that could be used in the dilemma, possible courses of action and the impact/consequences of each one. They also describe what they would do now if faced with the same dilemma. The faculty have created a rubric for grading this and will use the same rubric for assessing how well students did with it for the program assessment.

And yes, they will use what they learn from this to improve how they deliver the outcome. But since we will also be learning about assessing, we expect to use that knowledge as we structure the assessment of the programming outcome (this is the computer science department) which we will be doing this fall.


We are trying to use embedded assessments whenever possible.

Are you using outcomes at your school?
Catherine Wehlburg
Posts: 2

29/05/2007 15:56  
Eric,

I think that this is a very important question. Assessment is often seen as an "add on" rather than an integral part of education. Part of this is because it has been mandated from "on high" rather than being an essential part of teaching. When assessment is seen as a separate task, it usually is seen as an activity that should be rewarded or paid for by the institution. The difficulty is, of course, in making assessment seen as a part of (and inseparable from) teaching and learning. We are working on that here at Texas Christian University, but it is going to be a long process!

--Catherine Wehlburg
Eric Soulsby
Posts: 2

29/05/2007 18:23  
I appreciate the comments. When I used the term "assessment" I meant it in an overarching sense of encompassing defining goals/objectives/outcomes, measuring how well students meet the outcomes, evaluating the results of measurement, and feeding back what was learned as recommended actions for improvement. Hence, not just the act of assessing/measuring. (I realize the inconsistent terminology used in the assessment community often comes back to bite us.)

I, too, have had some faculty get energized when embarking on assessment, and after getting started realize the usefulness of doing so. These are our "champions" helping to get others motivated. At the same time, there are some who don't believe it truly makes a difference beyond what they have already been doing for years. By this I mean that, in the eyes of some, a well-defined assessment planning process provides little more than what an anecdotal (or, as a colleague terms it, 'dialogical': assessment around the water cooler) look at curriculum delivery would reveal, and is therefore not worth all the effort that is required. Hence, the question about whether "assessment" has been shown to work in some research effort besides the work that was done by ABET, Inc.

I realize the tie-in with program review, but we have often had such reviews focusing more on the research productivity of the faculty than on student learning -- it is tough to raise the level of assessment in such reviews at a research-intensive university.

So, just curious if others have the same issue to grapple with. Now, enough of my issue and let's see what insights the presenters have in the conference session. Thanks.
Catherine Wehlburg
Posts: 2

29/05/2007 23:00  
For us, program reviews have been focusing on summative types of information. Certainly, they have used the assessment results, but they also focus on cost/benefit, student/teacher ratio, amount of space, etc. So, student learning assessment hasn't really made a huge impact in the areas of program review. It should -- but it hasn't yet.
Maha Bali
Posts: 8

30/05/2007 01:14  
I really liked Stephen's analogy between assessment and the feedback an actor gets from his/her audience.

My question is: does assessment always point us towards what "needs to be done" and "how"? It might give us an idea of which outcomes are not met, but by itself, assessment will not tell teachers exactly how to improve, right?

And I guess that is why involving the teachers in the assessment helps them understand (and perhaps also participate in) how the assessment addresses what actually goes on in the classroom.

How much of "big" assessments like accreditations can easily be translated into the classroom so that teachers can see the direct relation? Any thoughts?
Sue Saltmarsh
Posts: 3

31/05/2007 03:11  
Hi Stephen...I agree with your comment that the assessment of outcomes doesn't necessarily give much guidance for how learning might be improved, especially given that students often seem to be resistant to engaging in learning activities unless they can see a direct link between the activity and the assessment task...the interesting thing about the theatre analogy is that applause isn't generally heard until after the performance, so while audience response might be important to actors interested in knowing how their work is received by others, it doesn't necessarily influence performance per se--most actors I know seem to be primarily motivated by the intellectual and subjective challenges of interrogating texts and exploring relational possibilities through performance, with audience responses being an important, albeit secondary, factor...

...ideally, as educators, it would be nice to think that our approaches to pedagogy and assessment could provide the same kinds of intellectual and interpersonal motivations to students--those 'intrinsic rewards' of intellectual challenges and possibilities that enable learners to become critically engaged thinkers and innovators...what interests me, though, is how this is to be accomplished in the context of educational consumption--when education has become a 'product' that students consider themselves to have purchased, rather than a process in which they are engaging?...drawing on the theatre analogy, it seems that many of our students see themselves as having purchased the ticket that entitles them to sit in the audience and to receive satisfaction by watching the performance, rather than having stepped onto the stage, where learning involves not only remembering one's lines for the duration of the performance, but also interpreting, experimenting, interrogating, imagining, and becoming...

...so I suppose my question is, how might we make use of assessment to get our students up out of their seats and up onto the stage?
Stephen Ehrmann
Posts: 11

31/05/2007 12:55  
Several people have raised questions about the difficulty of working from summative data of what students have learned; it often provides little guidance about how to improve their learning in the future.

It's worth looking for exceptions to that generalization. The more fine grained the assessment, the better clues it provides about where work is needed (even if it doesn't provide direct clues about how to fix the problem, it at least narrows the focus). A "muddy points" course assessment technique, for example, can quickly draw attention to some particular idea or technique that isn't yet getting across; it's then up to the instructor to decide what to do about that problem, and sometimes the nature of the students' description of the problem can provide a clue.

It's usually better, I think, to gather evaluative information about the teaching/learning activities to complement assessment data: evidence that bears on the implicit or explicit theories that have guided the organization of instruction. Suppose, for example, that your approach to instruction would benefit if every student were participating and helping one another through online discussion and collaboration, but a third of your students aren't actually participating effectively. Then evaluate the barriers that are hindering that participation.

Here's a second, and final, example of studying the process. This is an extract from the Flashlight Evaluation Handbook.

"As a young instructor, Jon Dorbolo of Oregon State University taught "Introduction to Philosophy" for the first time. Every time he assigned a reading or other work, he asked students to fill out a form that asked about how difficult the work was, and how interesting it was.

"The bad news: he discovered that almost every reading was judged by a majority of students to be boring.

"The good news: most readings were judged interesting by at least a few students, although it seemed that each reading was appreciated by different students. Dorbolo began to realize that students had different, sometimes unconscious assumptions about life that were influencing their reactions to the readings.

"That was the clue he needed. Dorbolo decided to redesign the course for the next term, allowing students to choose from seven different tracks of readings. One track, for example, was focused mainly on religious philosophy. A second track focused on utilitarianism and another on existentialism. Although each track was geared to a different set of student assumptions, all tracks were designed to teach similar skills of philosophical reasoning. Students spent some time in their own tracks, and some time discussing or debating issues with students in other tracks."

To sum up: a) use assessment evidence to see where students are learning well, and not so well; b) identify what your 'theory' has been in an area where students aren't learning well; c) study your theory-in-use: is this actually what's happening? If not, what's preventing it from working as you'd hoped? This approach can be applied within a course, within a department, or across a university.