

Grading Online Evaluations

Schools see valuable opportunities in moving course evaluations online, but only if they can raise student-participation levels.

In the absence of internal processes for evaluating instructors' teaching abilities, most colleges and universities put the responsibility on students. But is this fair to faculty? After all, a whiff of conflict of interest hangs over the whole proceeding. Students might grade a professor poorly as payback for a bad grade, for example. Conversely, students might give great reviews to instructors who dole out A's like Halloween candy. Or they might not even bother to respond. Now, with more and more institutions moving their course evaluations online, the question is whether technology will compound these concerns or resolve them.

Early research suggests that faculty may actually benefit from the move online. Jessica Wode, an academic research analyst with the Office of Evaluation and Assessment at Columbia College Chicago (IL), reviewed the academic literature on online course evaluations last spring. Her conclusion: Worries that students with grudges are the most likely to fill out online forms are unfounded. "You actually find the opposite," explains Wode. "Either there is no effect or the students who did poorly in the class probably aren't even going to bother evaluating the course."

Indeed, there are indications that online evaluation systems may actually suppress participation among poor performers. In her unpublished dissertation at James Madison University (VA) in 2009, researcher Cassandra Jones found that class performance played a role in determining which students filled out an online evaluation: Students who received higher grades in a class were more likely to fill out a survey. As a result, noted Jones in her paper, "course-evaluation ratings could be artificially inflated because students with lower grades are not participating in the online course-evaluation process."
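
A toy calculation makes the mechanism concrete. All grades and ratings below are hypothetical; the point is only that when low-scoring students opt out, the observed average rises even though no individual opinion changed.

```python
# Toy illustration of Jones' finding: if low-grade students skip the
# survey, the observed average rating overstates the class's true opinion.
# All grades and ratings are made up for illustration.

from statistics import mean

# (course grade, evaluation rating on a 1-5 scale) for every student
all_students = [("A", 5), ("A", 4), ("B", 4), ("C", 3), ("D", 2), ("F", 1)]

true_avg = mean(rating for _, rating in all_students)

# Suppose only students with a B or better bother to respond online
respondents = [rating for grade, rating in all_students if grade in ("A", "B")]
observed_avg = mean(respondents)

print(f"average if everyone responded:    {true_avg:.2f}")   # 3.17
print(f"average among actual respondents: {observed_avg:.2f}")  # 4.33
```

Under these made-up numbers, the course's rating jumps by more than a full point simply because the bottom half of the grade distribution never logged in.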

It would not be difficult to find faculty members who disagree strongly with these findings. And the low participation rates of many online systems raise legitimate questions about the reliability of any statistical analysis of their results.

In fact, anemic participation may be the single biggest issue facing online evaluations. At schools that simply ask students to fill out online class evaluations, the typical response rate is around 50 percent, according to "Response Rates in Online Teaching Evaluation Systems," a 2009 report by James Kulik of the Office of Evaluations and Examinations at the University of Michigan. In contrast, the typical response rate for paper-based evaluations is around 66 percent, and often much higher.

It's not difficult to figure out why online response rates are lower: Paper surveys are typically completed in class, whereas online evaluations are left to students' own time, where they are easily forgotten.

The fact that students need to make a concerted effort to fill out the online forms makes some faculty--especially those whose salaries or employment are tied to the results--very nervous. Regardless of what the latest research suggests, many instructors remain convinced that online evaluations tend to be filled out by outliers--those who want to butter them up or demean them. The majority of students who fall in the middle, they feel, are not well represented.

Academic forums online are filled with posts expressing such sentiments. "Our school went all online--no choice--maybe three years ago, and response rates dropped to almost nothing," wrote one faculty member in December. "People's entire careers are now resting on making sure they don't anger students enough to get them to actually log in and say what they think."

It is this very concern that convinced Texas Tech to keep paper-based evaluations for the majority of its courses: In Texas, the state government has mandated that student evaluations be used in determining merit pay for faculty. "As long as there is that significant policy issue--and salary issue--we have made a decision not to change the methodology for student and course evaluation," explains Valerie Paton, vice provost of planning and assessment. Instead, online course evaluations are used only for online or hybrid courses.

Increasingly, schools are experimenting with a variety of strategies to resolve the problem of poor participation. When Harvard Divinity School (MA) experienced response rates as low as 20 percent after implementing the CoursEval system in 2010, for example, staff tried various techniques--teasing, cajoling, even trivia questions--but they couldn't lift the rate higher than 60 percent. Then HDS resorted to out-and-out bribery: "The students who completed their course evaluations by a specific deadline were granted early access to view their grades," says registrar Maggie Welsh. Participation shot up to 90 percent.

Variations on this same strategy appear to be reaping benefits at institutions nationwide. Some schools give participating students access to their grades one or two days early; others extend the head start to as much as a week.

Mining the Data
If schools can raise participation to levels on par with paper-based evaluations, the benefits of the online system start to become apparent. For starters, the quality of the feedback seems to be higher. According to Welsh, the comments are much more detailed. She and Wode both attribute this, in part, to the anonymity of online evaluations. Prior to using CoursEval, HDS kept the raw forms in a folder that the instructor could view. Any student who suspected that the instructor might recognize their handwriting may have been less likely to tell the unvarnished truth. It's also likely that students take more time with online forms than with a paper survey they're rushing to finish at the end of class.

The other great potential of online evaluations lies in business intelligence: An online system's capacity for tabulating the results has proven a boon to administrators, not only in evaluating instructors but in helping to reshape course content. "Even folks who are not technologically inclined are able to see snapshots and very easily understand them," Welsh says. "It's providing real qualitative and quantitative data to senior administration in a way that we were just never able to do with paper."

It is in this area of data mining and analysis that online evaluation systems may offer the best opportunity to correct for survey bias. Take, for example, the issue of tough instructors getting poor evaluations from students looking for an easy A. Shane Sanders, an assistant professor of economics at Western Illinois University, believes that instructor scores can be corrected by introducing other relevant data points. Citing the work of economist Chad Turner of Texas A&M, Sanders believes a more valuable indicator would be "an instructor's student-evaluation quality score adjusted for the same instructor's student-evaluation difficulty score."
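
Neither Sanders nor the article spells out the exact formula, but a minimal sketch can show the shape of such an adjustment. Everything below--the instructor names, the ratings, and the weight parameter--is hypothetical, and the adjustment rule is one plausible choice among many: credit instructors whose students rate the course as harder than the scale midpoint, and discount those rated easier.

```python
# Hypothetical sketch of a difficulty-adjusted quality score (not Sanders'
# or Turner's actual formula). Ratings are on a 1-5 scale; 3.0 is the
# scale midpoint. An instructor whose course is rated harder than average
# gets a boost; an "easy A" course is discounted.

from statistics import mean

# Made-up per-instructor student ratings for quality and difficulty
evals = {
    "instructor_a": {"quality": [4, 4, 5, 3], "difficulty": [4, 5, 4, 5]},
    "instructor_b": {"quality": [5, 5, 4, 5], "difficulty": [2, 1, 2, 2]},
}

def adjusted_score(quality, difficulty, weight=0.5):
    # weight sets how far a one-point difficulty gap moves the score
    return mean(quality) + weight * (mean(difficulty) - 3.0)

for name, scores in evals.items():
    raw = mean(scores["quality"])
    adj = adjusted_score(scores["quality"], scores["difficulty"])
    print(f"{name}: raw {raw:.2f}, difficulty-adjusted {adj:.2f}")
```

Under these made-up numbers, the demanding instructor_a (raw 4.00) overtakes the lenient instructor_b (raw 4.75) once difficulty is factored in--exactly the kind of correction Sanders describes.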

Corrected for bias, student evaluations can be powerful--and reliable--indicators of the effectiveness of a course and its instructor. "You're getting not just one data point of one peer reviewer coming in for one class," explains Wode. "You're getting maybe 30 data points from students who've been there for 15 weeks."

While administrators and faculty have obvious uses for the evaluation forms, more and more schools are giving students access to the results, too. HDS is working on a way to allow students to view the tabulated results, as well as comments relating to the general value of a course. "The consensus we've reached is that the primary purpose of these course evaluations being viewable would be to assist people in choosing classes," says Welsh.
