
Viewpoint

Why Do We Assess?

I've been involved in higher education, as a professor and as an academic computing administrator, since the 1960s. My previous best training in how people learn came when I worked in the U.S. Army Corps of Engineers' Instructional Methods Division at Fort Belvoir, VA: Teaching officer candidates who were going off to war how to communicate effectively with their troops took on a life-or-death urgency. I wanted to bolster the confidence of the young officers-to-be, but I also had to get them to the point where they communicated well--which made me aware of the most basic dilemma of assessment: "On the one hand, a good teacher is supposed to model the role of 'coach' by giving continuous, non-judgmental feedback to the learner. On the other (at least in this country), she or he also must periodically grade student performance for the record. The resulting role conflict has been blamed for everything from ritualized student behavior to grade inflation." (Peter Ewell, address at the 18th conference on assessment hosted by AAHE; see http://www.teaglefoundation.org.)

My best training now is coming from our 2-year-old grandson. As he learns to talk, he attempts words that are beyond his lingual facility but not beyond his naming capacity: He says "moaner" for "mower," for example. He loves mowers and trimmers and "diggers" and all the machines he sees each day in the summer as we drive him around. We know what he means by "moaner" but we, ourselves, say "mower." Secretly, we hope he always says "moaner" because it is his word and because it's such a humorous image of lawn mowers actually moaning. We accept his word and encourage his talking about "moaners," never trying to correct but just accepting his contribution as part of the conversation. We know he will eventually say the word correctly.

We see our grandson once a week for the whole day. When we hear him use a possessive we glance at each other but say nothing until later. Learning language at that age can't be self-conscious; it has to be a natural part of conversation. We couldn't imagine the horror of having to tell him "You have a C+ in language development because you use your articles incorrectly." Yet all teachers have to face this dilemma with their students. How, as Peter Ewell pointed out, can you be both formative and summative?

But it's even more complicated than that for teachers, as all of us who have taught over the years, or over the decades, know. Take three examples of students I had during one year of a first-year writing class:

1. The "Big words will make me look like a good writer and impress the teacher" student.

2. The "Everything is relative" student who could not take a stand so could not develop an argument.

3. The "I'm smart but won't talk in class" student.

The "big words" student took 5 months to convince that people write to communicate and not to impress. I rewarded him for not using big words. The "everything is relative" student had to develop an argument why she should not fail the course; she did that quite well and I rewarded her for having an opinion. The "won't talk in class" student learned to contribute during the group chat sessions (on computers) we had in class for brainstorming: Her classmates recognized that she, in writing, was a natural class leader, she gained confidence, began talking in class, and her classmates regarded her very differently as the semester progressed.

All three students changed in ways that were profound and important for them. But this raises a very important question: What do we know about those changes, and what did the students think about them? How could I make one summative statement, the grade, about these three students that captured or even hinted at the valuable life lessons they had learned? It is as if you took a three-month trip around the world and all you were allowed to say about it was A, B, C, D, or F. We educators miss most of the good stuff, the real story of change in our students. We may see it in the moment, but we generally have no uniform system to capture the story or to represent it to others. Indeed, the students have no easy or generally accepted way to represent their story to employers or admissions offices.

Information technology, importantly, allows us to collect more evidence of the students' experience and to assess that evidence in more ways. It allows us to link the evidence over time, build cohesion elements into the collection of evidence, and develop a much richer picture of a learner than the standard grading system provides.

Regarding the three students whom I've described, what we really want to know is: "What did those students think about their changes? What was their story? How did they do in their other classes afterward? What difference did these lessons make in their lives?"

And I could only say, "Well, number 1 got a B, number 2 got a C, and number 3 got an A. After that semester, I don't know. Check their transcripts."

I have the summative assessment but have no representation of the formative assessment. At the individual student level, we miss the whole story of learning because we boil it all down to one letter. We don't capture the teacher's stories about the students and we don't capture the students' stories about themselves.

It does not make sense that we lose so much valuable knowledge when we no longer need to.

Keep in mind that it is not primarily the actual papers these students wrote that are important to us as educators in getting a fuller picture of these students. Instead, we want to know how the students changed from their own perspective and how those changes affected their subsequent lives as learners. We want to know the meaning of these stories. The papers by themselves don't tell us that. But the students' comments about writing the papers do.

How can we know much about education if, at the most basic level, we don't know what learning means to individual students?

Another Approach

What if, for example, I could have asked each of those three students to hand in two pieces of writing for each assignment: one the paper itself (or lab report, math problem, architectural drawing, or field experience report), and the other a short comment--say, a paragraph or two--about the paper? In that comment they would have responded to the following prompts: "What was it like writing this paper?" "What do you think of this paper compared to others you've written?" And "What does the paper mean to you?"

I could have graded both the paper and their comments equally. This way, I would have known more about their thinking process--not exactly think-aloud protocols, but a summative "think aloud."

And what if the students could have summed up their comments semantically in a few words at the end: What did writing the paper (doing the lab work, creating the drawing, etc.) mean to them? In this scenario we still have the grades, but we now also have important elements of the students' stories.

Organizing the extra comments on paper would, of course, be complicated. But organizing them with an online portfolio--assuming the portfolio works easily and does not add to the complexity--can be straightforward.

Once the students' comments and the semantic statements about those comments are in the portfolio, they become easy to search and organize in different ways for different purposes. The student comments become data that can respond to queries and generate reports.
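To make this concrete, here is a minimal sketch of what "comments as queryable data" could look like. All of the names, sample entries, and tags below are hypothetical illustrations, not a real portfolio system's schema: each entry pairs an artifact with the student's reflection and a few semantic tags, and a simple query pulls out every moment matching a theme.

```python
from dataclasses import dataclass, field

@dataclass
class Entry:
    """One portfolio entry: the artifact plus the student's reflection on it."""
    student: str
    assignment: str
    artifact: str                 # the paper, lab report, drawing, etc.
    reflection: str               # the student's comment on doing the work
    meaning_tags: list[str] = field(default_factory=list)  # semantic summary

# Hypothetical sample data, standing in for a real portfolio store.
portfolio = [
    Entry("student1", "essay-1", "(paper text)",
          "Big words felt safer than plain ones.",
          ["impressing", "confidence"]),
    Entry("student2", "essay-1", "(paper text)",
          "I finally argued for one side.",
          ["taking a stand"]),
    Entry("student1", "essay-2", "(paper text)",
          "This time I wrote to communicate, not to impress.",
          ["communicating", "confidence"]),
]

def query(tag: str) -> list[Entry]:
    """Return every entry whose semantic summary includes the given tag."""
    return [e for e in portfolio if tag in e.meaning_tags]

# A report of every moment a student reflected on confidence:
for e in query("confidence"):
    print(e.student, e.assignment, "-", e.reflection)
```

Because each reflection carries its own tags, the same store can answer different questions for different audiences: a teacher can trace one student's story across a semester, while an institutional researcher can aggregate across students by theme.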

For Better Assessment, Start At The Most Basic Level

If our entire apparatus--or industry--of assessment, accountability, and tracking of student outcomes started with students telling more of their story in a way that lends itself to building data structures, the entire system would produce much more valuable information for researchers, assessment experts, institutional research offices, and accrediting agencies. We would base the entire apparatus on meaning created by the students instead of trying desperately to find meaning after the fact, as we are doing now.

No matter what superstructure you build, electronic or not, if it is based on classroom procedures that miss the really important evidence, it will not provide the benefits promised.

Accrediting agencies have moved beyond being satisfied with the mere fact that an institution has an electronic portfolio. Keeping the same curriculum and the same teaching and learning methods, extrapolating from that curriculum to identify its implicit general learning goals, and then building ex-post-facto rubrics based on those goals will produce temporary value for the faculty who re-examine their syllabi, but will probably have minimal effect on the students.

Instead, starting with the interaction between teacher and student--to see what more we can keep of those crucial moments that help the student grow over time, and that will yield much richer learning data--can be truly transformative. Buying an ePortfolio system itself does nothing; understanding what it frees you to do--rethinking from scratch how we teach and learn--does everything. Start with changes in how we assess individual students and we'll get somewhere.
