Online Assessment | Feature
Keeping an Eye on Cheaters
When it comes to secure testing online, even high-tech solutions rely on an old standby: a human proctor. But is such an approach sustainable in the long run?
- By Linda L. Briggs
Illustration by Rafael Ricoy
A student labors over a midterm exam while a vigilant proctor peers over his shoulder, watching for any sign of cheating. It sounds like a tableau from a century ago, but the fact is that human proctors are alive and well, despite meteoric growth in institutions' online offerings. Even the most high-tech online testing solutions often rely on a person watching the exam-taking, whether that proctor is on-site at a testing center or monitoring the test via webcam.
Yet, with more than 6.7 million students now taking at least one course online, is this approach sustainable and--equally important--is it effective in curbing cheating? CT asked three online learning experts for their perspectives.
The Human Touch
Human proctoring "has been around for over 100 years" and works very well, says Peg Wherry, director of online and distance learning at Montana State University. There's already a well-established system for exam proctoring, created by schools that were offering distance learning--mail-correspondence courses, essentially--long before computers arrived on the scene. This proctoring network is generally available to other colleges and universities.
Although Wherry has years of experience with online learning programs at multiple institutions, MSU's distance-learning program is just beyond its pilot phase; online courses for undergraduates were offered to a small group for the first time in fall 2012. At testing time, online learners select human proctors from a list furnished by the university, or offer up their own proctor for approval. (During the fall pilot, remote students within a certain radius of MSU had the option of coming to the school's on-campus testing center, although this facility is used almost exclusively by on-campus students.)
The testing process followed by MSU is fairly typical of nationwide practices. If students want to suggest their own proctor instead of selecting one from a university-supplied list, a number of restrictions apply--for example, no family members, friends, supervisors, or coworkers. In general, MSU stipulates that the proctor have a professional role in a relevant learning organization. The proctor, who is not paid, is e-mailed instructions on how to conduct the exam and a list of what materials--if any--are allowed in the exam room. Proctors do not need knowledge of the test subject matter. As for appropriate testing locations, MSU's suggestions include public libraries and universities (generally free) as well as Sylvan Learning Centers, which charge a fee but have more than 1,000 locations nationwide.
New Technology Options
It's a similar story at Penn State University's World Campus, a large online program that serves up some 900 courses a semester. Online learners can select their exam proctors from an extensive Penn State human proctoring network. But nowadays they can also opt for a new technology-based service: Students can take the exam anywhere with an internet connection, using a webcam and software that authenticates the identity of the test-taker by monitoring keystrokes.
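Keystroke authentication of this kind generally works by comparing a test-taker's typing rhythm against a profile captured at enrollment. The sketch below is a minimal illustration of that idea only--the timing data, threshold, and function names are assumptions for the example, not Kryterion's actual method:

```python
# Illustrative sketch of keystroke-dynamics authentication: compare the
# timing "rhythm" of a typed phrase against an enrolled profile.
# Names, data, and threshold are hypothetical, not any vendor's method.

def inter_key_intervals(timestamps):
    """Convert key-press timestamps (seconds) into gaps between presses."""
    return [t2 - t1 for t1, t2 in zip(timestamps, timestamps[1:])]

def rhythm_distance(enrolled, attempt):
    """Mean absolute difference between two interval sequences."""
    gaps_a = inter_key_intervals(enrolled)
    gaps_b = inter_key_intervals(attempt)
    return sum(abs(a - b) for a, b in zip(gaps_a, gaps_b)) / len(gaps_a)

def authenticate(enrolled, attempt, threshold=0.05):
    """Accept the attempt only if its rhythm is close to the profile."""
    return rhythm_distance(enrolled, attempt) <= threshold

enrolled = [0.00, 0.18, 0.35, 0.55, 0.71]   # timings from enrollment
genuine  = [0.00, 0.19, 0.36, 0.57, 0.72]   # same person, slight jitter
imposter = [0.00, 0.40, 0.55, 1.10, 1.30]   # different typing rhythm

print(authenticate(enrolled, genuine))   # True
print(authenticate(enrolled, imposter))  # False
```

Production systems layer far more signal on top of this--per-key-pair statistics, hold times, ongoing re-checks during the exam--but the core comparison is the same.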
The solution, from Kryterion, is relatively new at World Campus, says Wayne Smutz, associate VP for academic outreach and the executive director of the online program. He estimates that fewer than 10 percent of World Campus students use the technology, though he expects that number to grow.
As with other technology-based solutions, including ProctorU, the Kryterion system also employs a human proctor watching from a remote location. According to the company's website, proctors are trained to watch for suspicious behavior--unusual eye movements, noises, or moving about--and they can stop the test, send a warning, or take other action at the discretion of the institution.
Ideally, online learners should be offered a selection of testing authentication methods, including both human and computer proctoring, asserts Deb Gearhart, vice provost for e-learning and strategic partnerships at Ohio University. Gearhart recently moved to OU from Troy University (AL), where she helped develop its eTroy online learning program. Troy has some 30,000 students, about half of whom take some or all of their courses online.
In addition to the usual range of choices for proctored exams, eTroy students can select remote proctoring options including ProctorU or Software Secure's Securexam Remote Proctor, both of which use cameras and off-site human proctors. In addition, eTroy students log in for exams using LogMeIn Rescue, a software program that locks down their browsers. They also must answer several personal questions, such as a previous address or employer, posed by Acxiom Software.
Troy charges students a set fee of $25 for using a live proctor at a Troy-approved location. If a remote proctoring option is chosen, there is no charge, unless the student's computer lacks a webcam. In that case, the student must purchase an external camera through Troy's online bookstore for about $65.
ProctorU has proved to be the most popular solution among Troy students, Gearhart says, partly because the technology works well, and partly because an exam can be scheduled at virtually any time. This is particularly useful for service members in remote locations outside the US (eTroy caters heavily to students serving in the military).
Gearhart now plans to grow the e-learning program at Ohio University, where online learning for undergraduates is less than five years old. There are presently about 36,000 students at OU, 4,000-plus of whom are fully online learners. Many undergraduates choose human proctors, Gearhart says, but the school may add electronic choices as the program grows. For now, a small testing center is available on campus for students attending class in person and for those online students who live nearby.
Cheating: A Double Standard
Ultimately, the purpose of all the proctors, webcams, keystroke monitors, and browser locks is simple: to eliminate cheating. Yet those who are deeply involved in distance learning say cheating is no more prevalent online than in face-to-face classrooms. Studies back them up, claims MSU's Wherry. For example, a 2006 study published in College Student Journal found that about 3 percent of online students admitted to cheating, with a similar percentage on the face-to-face side. However, other studies have documented online cheating rates both higher and lower than in traditional courses. Nevertheless, Wherry finds the whole online cheating debate "annoying" because the implication is that the problem is unique to online learning, and it generates undeserved criticism of online testing methods.
Gearhart, too, sees a double standard in how online exams are viewed. "A lot of people say there's more cheating online than face-to-face," she says, "but that's only because they tend to turn a blind eye to face-to-face cheating."
In her view, even government regulations hold distance-learning programs to a higher standard. The Higher Education Opportunity Act of 2008, for example, requires accrediting agencies to assure that distance-learning programs can verify student identities. Face-to-face classes are under no such obligation. "Online learning seems to have to jump through more hoops in general than face-to-face," notes Gearhart.
Despite this, says Gearhart, online programs can--and should--do a better job of designing exams to prevent cheating: "Too much of online learning echoes paper testing." Techniques such as competency testing, which assesses established standards of skills, knowledge, and abilities, can be much broader--and much more difficult to fake, copy, or otherwise scam--than exams that merely test what has been learned in one portion of one course.
Designing Cheat-Resistant Exams
When it comes to designing effective online exams, it's all about asking the right questions. Peg Wherry, director of online and distance learning at Montana State University, favors exams that test the application of knowledge rather than just recall of facts--an approach that is not only a better measure of students' understanding of course material, but also makes it harder for them to cheat. For building exams, MSU uses Respondus, a tool for creating and managing exams, along with the university's learning management system, Desire2Learn. Instructors use both software packages to create question banks and the exams themselves.
To keep online exams fair, Deb Gearhart, vice provost for e-learning and strategic partnerships at Ohio University, advocates using a pool of randomly selected questions to create a different exam for each student. These personalized exams are particularly effective when the tests must be offered over several days. Of course, this method requires more time from instructors, who must create many iterations of questions of escalating difficulty for the testing bank.
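The randomized-pool approach Gearhart describes can be sketched simply: tag each bank question with a topic and difficulty, then draw a per-student sample so that every exam covers the same ground with different items. The data layout and function names below are illustrative assumptions for the example, not any particular university's system:

```python
import random

# Hypothetical question bank: each (topic, difficulty) slot holds several
# interchangeable questions. IDs like "L1a" stand in for real questions.
BANK = {
    ("limits", 1): ["L1a", "L1b", "L1c"],
    ("limits", 2): ["L2a", "L2b"],
    ("derivatives", 1): ["D1a", "D1b", "D1c"],
    ("derivatives", 2): ["D2a", "D2b", "D2c"],
}

def build_exam(blueprint, student_id):
    """Draw one question per (topic, difficulty) slot in the blueprint.

    Seeding on the student ID makes each student's exam different but
    reproducible, so an instructor can regenerate it for review later.
    """
    rng = random.Random(student_id)
    return [rng.choice(BANK[slot]) for slot in blueprint]

# Every exam covers the same blueprint--one easy and one harder question
# per topic--but different students see different items.
blueprint = [("limits", 1), ("limits", 2),
             ("derivatives", 1), ("derivatives", 2)]
print(build_exam(blueprint, student_id=1001))
print(build_exam(blueprint, student_id=1002))
```

The blueprint guarantees fairness (same coverage and difficulty mix for everyone), while the random draw makes answer-sharing far less useful over a multi-day testing window.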
For courses such as math, technology can be used to track students as they work through a sequence of questions, Gearhart adds, monitoring gradual progress over time and making cheating difficult. This adaptive method has been used for years by software such as ALEKS, an internet-based assessment and tutoring program.
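The adaptive idea--adjusting the next question to the student's running performance--can be illustrated with a toy difficulty ladder. This is a deliberately simplified stand-in, not ALEKS's actual knowledge-space algorithm:

```python
# Toy sketch of adaptive questioning: step difficulty up after a correct
# answer and down after a miss. Real adaptive systems model the student's
# knowledge state far more richly; this only shows the basic feedback loop.

def next_difficulty(current, correct, lowest=1, highest=5):
    """Move one level up on a correct answer, one level down on a miss."""
    step = 1 if correct else -1
    return max(lowest, min(highest, current + step))

def run_session(answers, start=3):
    """Replay a sequence of right/wrong answers, logging each difficulty."""
    level, history = start, []
    for correct in answers:
        history.append(level)
        level = next_difficulty(level, correct)
    return history

# A student who answers right, right, wrong, right climbs, slips, climbs:
print(run_session([True, True, False, True]))  # [3, 4, 5, 4]
```

Because each question depends on the answers before it, a copied answer key is of little use--which is why this style of assessment doubles as a cheating deterrent.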
At Penn State, Smutz is comfortable with the integrity of World Campus's online testing system. While there have been incidents of online cheating during exams--some of them blatant, he admits--they have been handled using the system in place. Like Gearhart, Smutz advocates assessment strategies that go beyond proctored exams. With few exceptions, World Campus courses are relatively small (25 to 35 students) and emphasize instructor interaction.
"Our philosophy is high interaction, group projects, and collaborative work," he says. Not only does this approach raise the quality of instruction, but it also lowers the stakes on any particular exam. Plus, he notes, cheating becomes more difficult when there is significant student-instructor interaction. Indeed, some courses eliminate proctored exams altogether--instead, faculty base grades on projects, non-proctored quizzes, and other measures.
An Issue of Scale
As the massive open online course phenomenon takes center stage, it is raising a number of interesting issues about online learning and assessment. Among them, of course, is how huge numbers of students can be tested fairly and efficiently--especially as MOOC organizations begin to offer for-credit options.
"We can learn from them," says Smutz, who is watching MOOCs with great interest. "Over time, I believe one of our big challenges is to continue to scale courses while maintaining highly effective classes and learning programs." Watching how--or if--MOOCs manage to achieve this level of excellence at scale will be highly instructive.
However, Smutz doesn't see existing methods for online testing scaling to the MOOC level. "I think [MOOCs] are going to have to go in a different direction," he notes. While that direction remains unclear, Smutz speculates that it might include new technologies such as machine-read essays, in which software can assess--and presumably correct and grade--narrative written exams.
In addition, Smutz believes learning analytics are a promising way to identify, early on, the areas where an individual student needs work. After all, why wait until exam time to discover problems?