Assessment Tools for MOOCs

As MOOCs are made available for credit, credible, scalable assessment options are essential. CT looks at the options.


MOOCs may have started out as an anarchic educational free-for-all, but as more schools move toward offering credit for these vast online courses, the expectations around assessing student performance are growing as rapidly as MOOCs themselves--with lots of questions yet to be answered.

For example, the recent agreement between Udacity, the Georgia Institute of Technology, and AT&T to offer a "MOOMS"--a MOOC-delivered Master of Science in computer science--for a fraction of the cost of its face-to-face equivalent raises a host of thorny assessment issues. "A critical question is 'What is success?' and 'How will we measure it?'" the faculty working group asks in its analysis of the agreement, which was made public in May by the online publication Inside Higher Ed. The working group openly worries that "a potential threat is that our evaluation methods will not assess the real value of MOOMS and thereby lead to ineffective programs."

This story appears in the August 2013 digital edition of Campus Technology.

But if institutions and MOOC providers want credit to be a viable option, then the question of assessment must be definitively settled. The American Council on Education (ACE) has moved the process one step forward by recommending five courses offered by Coursera and four by Udacity for credit. The recommendation comes with a requirement that a final exam be proctored and that the identity of the test taker be authenticated, whether in person at a testing center or via a webcam proctoring service.

But exactly how do you set about testing and evaluating thousands of students? "We don't specify a method," says Cathy Sandeen, ACE's vice president for education attainment and innovation. "It's up to the [MOOC] platform to decide."

Most MOOCs offer automated grading tools for straightforward testing, such as multiple choice, true/false, and short problem sets. It's when assessments wade into more complex territory, such as student essays, that grading solutions become controversial.
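To make the mechanics concrete, here is a minimal sketch of the kind of automated scoring such tools perform. The answer-key format and function names are hypothetical, not any platform's actual API.

```python
# Hypothetical answer key and grader; illustrative only,
# not any MOOC platform's real grading code.
ANSWER_KEY = {"q1": "b", "q2": "true", "q3": "42"}

def grade_submission(answers: dict) -> float:
    """Return the fraction of questions answered correctly."""
    correct = sum(
        1 for q, key in ANSWER_KEY.items()
        if answers.get(q, "").strip().lower() == key
    )
    return correct / len(ANSWER_KEY)

print(grade_submission({"q1": "B", "q2": "true", "q3": "41"}))  # prints 0.666...
```

Checks like these run instantly and scale to any enrollment, which is why they dominate MOOC assessment.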

Peer Assessment
Coursera utilizes a much-discussed peer-grading system in which students are organized anonymously into small groups (four people is the recommended configuration) to grade each other's submissions. The groups are double-blind and random: The system's anonymity and the sheer number of people in a typical MOOC make any attempt at collusion or grade inflation unlikely. A student cannot receive a grade on a paper without first grading the papers of everyone else in the peer group, making participation mandatory.
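As a rough illustration of that workflow, the sketch below randomly assigns students to groups of four and withholds a grade until a student has reviewed all assigned peers. The function names and data structures are hypothetical; Coursera has not published its implementation.

```python
import random

GROUP_SIZE = 4  # the recommended configuration described above

def assign_peer_groups(student_ids: list) -> dict:
    """Randomly map each student to the anonymous peers whose work they must grade."""
    shuffled = list(student_ids)
    random.shuffle(shuffled)
    groups = [shuffled[i:i + GROUP_SIZE] for i in range(0, len(shuffled), GROUP_SIZE)]
    return {s: [p for p in group if p != s] for group in groups for s in group}

def grade_is_released(student, completed_reviews: set, assignments: dict) -> bool:
    """A student's own grade unlocks only after they review every assigned peer."""
    return set(assignments[student]) <= completed_reviews
```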

Whether or not peer grading works is hotly debated. But if it's used, a good scoring rubric--the very specific standards that a student uses to assess another's work--is critical. It is also extremely difficult to create. Hank Lucas, a professor of information systems in the Robert H. Smith School of Business at the University of Maryland, taught his first MOOC--Surviving Disruptive Technologies--this spring through Coursera. Of the 16,000 students who enrolled from 100 countries, 600 to 700 completed both assessment exercises. For the first one, Lucas used a set of multiple-choice questions that were graded automatically by the Coursera system; for the final, he went with a peer-graded essay.

A staff member at Coursera lent time and expertise to designing the scoring rubric for peer grading, and Lucas notes that he spent significant time on it himself. "The grading rubric was harder than writing the exam questions," he says. "It took us two or three iterations before we were satisfied." Despite this, some students were still bewildered by the peer-grading instructions and process. Lucas' teaching assistant monitored the discussion boards and alerted him to confusion around grading; he then jumped in to answer questions. If he offers the course again, Lucas plans to create a video explaining how peer grading works, including the difference between a median and an average score, which created confusion.
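Since that median-versus-average distinction tripped up students, a two-line worked example shows why the two can differ when one peer grades far out of line with the rest:

```python
from statistics import mean, median

peer_scores = [9, 8, 9, 2]  # four peer grades; one harsh outlier

print(mean(peer_scores))    # 7.0: the average is dragged down by the outlier
print(median(peer_scores))  # 8.5: the median stays near the consensus
```

Reporting the median of peer scores blunts the effect of a single unusually harsh or generous grader, which is likely why the distinction matters to students.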

Although some students complained about their scores on the peer-grading exercise, Lucas said he felt the process was fair overall. When he reviewed disputes, he says, he "tended to agree" with the original grade rather than with the student making the complaint.

Automated Essay Grading
The automated grading options that most MOOCs offer work best with math, computer programming, and some science courses. The assessment challenge is tougher for "soft" subjects such as literature, philosophy, and history, where professors often want students to demonstrate their grasp of the concepts in written essays.

In April, edX announced that it is taking on the challenge with the introduction of an automated essay-grading tool. The software, which is available for free, uses AI first to analyze a large batch of essays graded by a human teacher, then applies what it has "learned" as it grades student papers. The approach has plenty of vociferous critics, including university professors, but edX touts the fact that the tool provides instant feedback, allowing students to submit an essay numerous times and--ideally--improve their work with each version.
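The broad shape of that train-then-grade loop can be sketched in a few lines. The pipeline below (TF-IDF features plus ridge regression, via scikit-learn) is a generic stand-in, not edX's actual implementation, and the essays and scores are placeholders.

```python
# Generic "learn from human-graded essays, then score new ones" sketch.
# Illustrative only; this is not edX's tool.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

graded_essays = ["First human-graded essay text ...",
                 "Second human-graded essay text ..."]   # placeholder texts
human_scores = [4.0, 2.0]                                # placeholder scores

model = make_pipeline(TfidfVectorizer(), Ridge())
model.fit(graded_essays, human_scores)       # "learn" from the human-graded batch

new_score = model.predict(["A new student submission ..."])  # instant feedback
```

Because the model returns a score in milliseconds, a student can revise and resubmit as many times as the instructor allows.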

"We know that students really like immediate feedback, and more rather than less," says Vik Paruchuri, who serves as a machine learning engineer at edX. But he notes that it's simply too soon to say how well the automated essay-grading software is working: "We have a lot of data, but we're still working through [it]."

Smitha Radhakrishnan, an assistant professor of sociology at Wellesley College (MA), is preparing her first MOOC, titled Introduction to Global Sociology, for edX. She anticipates using the autograder for many of her assignments. "I give it a rubric and then grade 15 [assignments]," she explains. "It learns what I'm giving points for and what I'm not." She plans to replace the two- to three-page "think pieces" required in her on-campus course with short-answer questions that take advantage of the autograder. Her goal is "to train the people who take the MOOC to write really concise, awesomely conceived, evidence-based paragraphs."

Proctoring MOOC Exams
Whether they use peer review or autograding, most for-credit MOOCs are moving toward requiring at least one proctored exam during the course, with the student's identity verified by a picture ID.

Proctoring can be done either at a physical location or online. For MOOC-size courses, Pearson VUE is one of the few companies with the capacity to accommodate the large numbers involved. The company says it has more than 5,000 testing centers in 175 countries, and is the on-site proctoring company of choice for Udacity.

A host of other companies, such as Software Secure, ProctorU, and Kryterion, also offer online proctoring. These companies require that students hold up a picture ID on camera prior to beginning the exam; someone then remotely watches the student. Browsers and operating systems are typically locked down; some companies also monitor keystrokes to help verify identity, checking the speed and style of typing against previous samples from the same student.
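The keystroke check works roughly like a biometric: the service stores a typing-rhythm profile from earlier samples and compares new sessions against it. The sketch below uses mean inter-key latency as the only feature; real services use much richer features (per-key hold times, digraph timings), and the tolerance here is an invented placeholder.

```python
from statistics import mean

def inter_key_latencies(timestamps: list) -> list:
    """Gaps (in seconds) between successive keypress timestamps."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def matches_profile(session: list, profile: list, tolerance: float = 0.35) -> bool:
    """Crude identity check: is the session's mean latency within a
    tolerance band of the stored profile's mean latency?"""
    observed = mean(inter_key_latencies(session))
    expected = mean(inter_key_latencies(profile))
    return abs(observed - expected) <= tolerance * expected
```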

Although nothing stops a MOOC instructor from requiring that all exams be proctored, this step is typically reserved for the final exam, largely because of the cost to the student. Both Coursera and Udacity charge a fee for proctored exams. In January, Coursera announced a "signature track" for a few of its courses, in which students pay between $30 and $100 for a "verifiable electronic certificate" confirming that the student sat proctored exams and their identity was checked. For its part, edX offers a "proctored certificate" for select courses. It costs around $90 and requires students to pass an exam under proctored conditions in addition to completing any other required coursework.

The first ACE-reviewed MOOC courses, which required a proctored final exam, were offered at the start of the year, so there's little or no data yet on how well the courses fared. "It's early in the game," Sandeen says. "We're still waiting to see how this plays out."
