Uncharted Territory
Are you choosing the right online assessment
products and getting the most out of the tools you
have? Online assessment is fraught with pitfalls, but
these savvy educators and technologists are meeting
the challenge, and then some.
There's certainly no shortage of online learning
platforms out there today. Blackboard,
Desire2Learn, Sakai, Moodle, Angel Learning, and Datatel (projected to be a CMS player in Q4
2008): You name the interface, and chances are that someone
at your school has evaluated it at some point in the not-too-distant past.
But investigating the value of the assessment components of these tools, now that's another story altogether. This exploration, essentially the process of assessing online assessment, can be far more complicated. And while many higher education administrators trust their CMS vendors implicitly, a growing number are developing their own metrics to gauge how well students are doing when they learn or access course content online.
Some officials see this process as a critical part of online learning systems.
Others see it as an exercise in calculating return on investment (ROI): a way to see precisely how much bang they are getting for their buck. Ron Legon,
executive director of Quality Matters, a program
designed to certify the quality of online courses and online components,
says that however educators choose to evaluate their online assessment tools, doing so is a critical part of performance evaluation overall (see "Setting the Standards"). (Quality Matters is
run by MarylandOnline, a consortium
that champions distance learning in
Maryland and serves as a directory for
Maryland schools involved in the online
learning experience.)
"To offer online learning is one
thing," says Legon. "To actively evaluate
it to make sure it's doing its job is something entirely different."
Selecting Rubrics and Metrics
Inherently, assessment tools or rubrics
are nothing without metrics. In traditional
classroom settings, most of these
metrics take the form of test scores,
compiled after a particular lesson (in
the case of formative assessments) or a
particular sequence of the curriculum
(in the case of summative assessments).
Other assessments consist of grades or
rankings for things such as participation,
homework, and attendance. This is
nothing new.
Many of these same metrics exist in the world of online tools as well; only the media through which they are applied differ. Instead of distributing a paper
exam, for instance, a professor may have
students respond to multiple-choice
questions via a web browser. Instead of
having students meet at the library for
group homework assignments, a professor
may require them to meet in an online
collaboration environment.
Setting the Standards
IF ANYONE KNOWS how to assess the value of online assessment tools, it's the folks at
the Quality Matters program, an assessment-oriented effort from MarylandOnline. Over the last few years, under the leadership of Executive Director
Ron Legon, the Quality Matters group has identified 40 specific (and proprietary) standards under eight general categories to evaluate the way an online course is structured.
These standards have been incorporated into a rubric and weighted from 1
(important) to 3 (essential). Currently, five of the 40 standards on the rubric relate
specifically to assessment. They are:
- The types of assessments selected measure the stated learning objectives and
are consistent with course activities and resources.
- The course grading policy is stated clearly.
- Specific and descriptive criteria are provided for the evaluation of students'
work and participation.
- The assessment instruments selected are sequenced, varied, and appropriate to the content being assessed.
- 'Self-check' or practice types of assignments are provided for timely student
feedback.
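To make the weighting concrete, here is a minimal sketch of how a weighted rubric of this kind might be represented and scored in code. The standard wordings are paraphrased from the list above, and the individual weights are illustrative guesses, not Quality Matters' proprietary values.

```python
# A sketch of a weighted course-review rubric in the spirit of the
# Quality Matters approach. The weights below are illustrative guesses;
# the program's actual standards and weightings are proprietary.

RUBRIC = [
    # (standard, weight: 1 = important ... 3 = essential)
    ("Assessments measure the stated learning objectives", 3),
    ("Grading policy is stated clearly", 2),
    ("Evaluation criteria are specific and descriptive", 2),
    ("Assessment instruments are sequenced, varied, appropriate", 2),
    ("Self-check/practice assignments give timely feedback", 1),
]

def score_course(ratings):
    """ratings maps each standard to True/False (did the course meet it?).
    Returns the weighted share of rubric points the course earned."""
    earned = sum(weight for standard, weight in RUBRIC if ratings.get(standard))
    possible = sum(weight for _, weight in RUBRIC)
    return earned / possible

# Example: a course that meets every standard except the last one.
ratings = {standard: True for standard, _ in RUBRIC}
ratings["Self-check/practice assignments give timely feedback"] = False
print(f"Weighted rubric score: {score_course(ratings):.0%}")  # 90%
```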
Legon points to learner engagement as a major assessment criterion. He insists that
online learning should not be a passive experience for any student, and emphasizes
the need for educators to implement courses that inspire students to get involved. He
notes that getting students successfully launched in the course also is important, since
most dropouts occur in the first two weeks. "The great thing about online courses is that
there's a full record of everything that's captured, and it can be looked at by outside
third parties," he says. "While teachers might not like this when they falter, it's a great
way for us to go back into a classroom experience and learn from it."
Currently, Quality Matters is working with several hundred institutions around the country to help shape their online learning platforms and associated assessments.
For more information about the program, or to access its rubrics and standards, visit the Quality Matters website.
Most professors apply metrics
through predetermined assessment
rubrics. At Rio Salado College (AZ),
however, many of the rubrics are fun: multiple-choice practice quizzes turned into online games with a little help from Quia's web-based software. Jennifer Freed, Rio
Salado faculty chair of instructional
design, says the playful interface gives
students a chance to learn comfortably.
"The games are fun and they provide
instant feedback," says Freed, who
notes these formative assessments are
interspersed with more "serious" web-based
summative assessments once or
twice throughout the semester. "I can't
think of a better way for students to
process new material."
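For readers curious about the mechanics, here is a toy sketch of the instant-feedback loop such practice quizzes rely on. It is a bare console version with made-up questions, not Quia's actual web-based product.

```python
# A toy, console-based version of a practice quiz with instant feedback.
# Quia's actual product is web-based; the questions here are made up,
# using the formative/summative distinction described earlier.

QUIZ = [
    ("A quiz given right after a single lesson is a ___ assessment.",
     ["formative", "summative"], 0),
    ("An exam covering a whole curriculum sequence is a ___ assessment.",
     ["formative", "summative"], 1),
]

def run_quiz(quiz):
    correct = 0
    for question, options, answer in quiz:
        print(question)
        for i, option in enumerate(options):
            print(f"  {i + 1}. {option}")
        choice = int(input("Your answer: ")) - 1
        # Instant feedback: the student finds out right away.
        if choice == answer:
            print("Correct!\n")
            correct += 1
        else:
            print(f"Not quite -- the answer was '{options[answer]}'.\n")
    print(f"Score: {correct}/{len(quiz)}")

if __name__ == "__main__":
    run_quiz(QUIZ)
```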
At Newbury College (MA), "metrics"
are much more conceptual. Yes,
educators assign scores to certain tests
and assignments, but at least in certain
psychology classes, Professor Charlie
Virga is more interested in seeing that
his first- and second-year undergraduate
students can demonstrate the "construction
of knowledge" from the beginning of
a semester, to the end of it.
For Virga, this means careful scrutiny
of online discussion posts. With the
help of his school's Blackboard system,
he archives all of the posts and grades them
periodically throughout the semester.
Relevant posts that link to course material
and provide elaboration or additional
information receive the highest
marks. Irrelevant posts, and posts that
have no link to course material or personal
experience, receive no score.
"In my book, it's all about critical
thinking," he says of his rudimentary
rubrics. "I don't have access to [my students']
thought processes online, but by
looking at the discussion posts, I can
try to identify the turning point where
they started to see something that they
couldn't see before."
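A minimal sketch of what a rudimentary rubric like Virga's might look like in code appears below. The field names and point values are hypothetical, inferred from his description rather than taken from his actual grading scheme.

```python
# A sketch of a rudimentary discussion-post rubric along the lines Virga
# describes: posts tied to course material (or personal experience) that
# elaborate on it score highest; irrelevant posts score nothing. Field
# names and point values are hypothetical.

def score_post(post):
    """post is a dict of flags an instructor sets while reading."""
    relevant = (post.get("links_course_material")
                or post.get("links_personal_experience"))
    if not relevant:
        return 0  # irrelevant, or no tie to the course: no score
    score = 1     # relevant and grounded in the course or experience
    if post.get("elaborates"):
        score += 1  # provides elaboration or additional information
    return score

posts = [
    {"links_course_material": True, "elaborates": True},       # -> 2
    {"links_personal_experience": True, "elaborates": False},  # -> 1
    {"links_course_material": False},                          # -> 0
]
print([score_post(p) for p in posts])  # [2, 1, 0]
```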
Keeping Tabs
With a course management system such
as Newbury's, archiving data on performance
is a cinch. Such is the case with
many other CMS platforms and online
assessment tools, too. Collecting data
on student performance in the virtual
environment, however, is only half of
the assessment effort; once professors
have the data, the next key step becomes
figuring out how to make sense of it.
One way to keep tabs on the degree to
which students are interacting with
online assessment technologies (and with
peers via the tools) is to apply business
intelligence. With the help of a virtual
learning environment from L Point Solutions
called Inetoo,
professors can encourage student collaboration
and communication online, and
later log in to analyze how students interact
with content and with each other.
For More Information
Don't miss these resources, generally hailed as 'must-reads.'
- Understanding by Design, by Grant P. Wiggins and Jay McTighe (Association for
Supervision and Curriculum Development, 2005).
- Assessing Online Learning, edited by Patricia Comeaux (Anker Publishing, 2005).
- Web-Based Learning: Theory, Research and Practice, edited by Harold F. O'Neil
and Ray S. Perez (Lawrence Erlbaum Associates, 2006).
- And head to our website for up-to-the-minute information on assessment and online assessment tools: visit our magazine archives, and subscribe to our Web 2.0, SmartClassroom, and IT Trends eNewsletters.
This service, dubbed "performance
intelligence," is something that founders
Robert Brouwer and Ahmed Abdulwahab
say is a higher education spin on the
kind of business intelligence used by
companies in industrial and manufacturing
sectors. While this product is
brand-spanking-new, Paul Kim, a professor
at Stanford University (CA), is
wasting no time deploying it; he's planning
to pilot it in his Web-Based Technologies
in Teaching and Learning class
this spring.
"After the completion of this course,
students will be able to describe how
web-based communication, collaboration,
and visualization technologies
play a role in the behavioral, cognitive,
constructivist, and social dimensions
of learning," says Kim, who also
serves as CTO of the university's
School of Education.
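To illustrate the general idea (not Inetoo's actual product or API), here is a back-of-the-envelope sketch of what mining an interaction log for this kind of performance intelligence might look like. The log format and student names are invented.

```python
# A back-of-the-envelope sketch of the performance-intelligence idea:
# mining an interaction log to see how students engage with content and
# with each other. The log format and names are invented, not Inetoo's.

from collections import Counter

# Each entry: (student, action, target of the action)
log = [
    ("ana", "viewed", "lecture-3"),
    ("ana", "replied", "ben"),
    ("ben", "viewed", "lecture-3"),
    ("ben", "replied", "ana"),
    ("cam", "viewed", "lecture-3"),
]

content_views = Counter(s for s, action, _ in log if action == "viewed")
peer_replies = Counter(s for s, action, _ in log if action == "replied")

for student in sorted(set(content_views) | set(peer_replies)):
    print(f"{student}: {content_views[student]} content views, "
          f"{peer_replies[student]} peer replies")
# A student with views but no replies (like 'cam') may be tuning out of
# the collaborative side of the course.
```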
Finally, at the Rose-Hulman Institute
of Technology (IN), educators
have turned to the Learning Management
Suite from Angel Learning to
map various content items (such as assessments, drop boxes, and discussion
forums) to institution-wide and course-specific
objectives and to generate
reports based on student performance
related to all items associated with a
given standard or objective.
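The mapping itself is easy to picture in code. Below is a minimal sketch of how content items might be tagged with objectives and rolled up into per-objective reports; the item names, objectives, and scores are hypothetical, and this is not Angel Learning's actual data model.

```python
# A minimal sketch of mapping LMS content items to objectives and rolling
# scores up into per-objective reports. Item names, objectives, and
# scores are hypothetical; this is not Angel Learning's data model.

from collections import defaultdict
from statistics import mean

# Each content item is tagged with one or more objectives.
item_objectives = {
    "quiz-1": ["course-objective-2"],
    "dropbox-project": ["course-objective-2", "institution-communication"],
    "forum-week-3": ["institution-communication"],
}

# Student scores per item, as fractions of full credit.
scores = {
    "quiz-1": [0.9, 0.7, 0.8],
    "dropbox-project": [0.85, 0.6, 0.95],
    "forum-week-3": [1.0, 0.5, 0.75],
}

# Roll every score up under each objective its item is mapped to.
by_objective = defaultdict(list)
for item, objectives in item_objectives.items():
    for objective in objectives:
        by_objective[objective].extend(scores[item])

for objective, all_scores in sorted(by_objective.items()):
    print(f"{objective}: mean {mean(all_scores):.0%} "
          f"across {len(all_scores)} scores")
```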
Claude Anderson, professor of computer
science and software engineering,
says the school recently incorporated Subversion for storing and sharing all of its faculty-level course assessment documents, and for keeping those documents under version control.
"We used a wiki-based system for a
couple of years, but found it too cumbersome,"
says Anderson. He adds that
with Subversion, Rose-Hulman has
"greatly simplified the coordination
between various faculty members
teaching a course."
Dissuading Cheaters
In a brick-and-mortar classroom, it's easy
for teachers to catch students peering
down at a cheat-sheet or passing answers
to a pal. In a virtual classroom, however, where in most cases educators have never seen students face-to-face and have no idea what kinds of technology setups students have in their homes, sniffing out cheaters is a much more difficult task.
This is a challenge Karen Swan knows
all too well. As research professor for
the Research Center for Educational
Technology at Kent State University (OH), Swan works regularly with professors
to devise ways to prevent cheating
in the online world. Yet, the harder
she tries, she admits, the harder she
finds the task. Her solution: keeping students
active with assessments before,
during, and after every class.
Extreme? Perhaps. But as Swan sees it
(after years of research), short of locking
students into a particular browser (which
still isn't foolproof if students have a second
computer at home), there is no way to
tell if online students are working together
behind the scenes. Rather than trying
to prevent this, she argues it's better to
throw multiple and repeated assessments at students so that, at least at some point, they are forced to do their own work.
"The only feedback for whether or not
they're learning is the assignments they
do, and because you don't have people
nodding their heads in a classroom [as
you teach], those [assessments] should
be multiple," she says. As for assessing
the quality of the feedback, Swan concedes
it's not her priority. "As long as
I'm getting feedback, I'm happy."
Other educators agree. Virga, the
psychology professor at Newbury, says
that in most online classrooms, since
it's so difficult to catch cheaters in the
act, educators simply must assign
assessments and trust that students
won't cheat. He adds that without a physical classroom to which students must report, educators can get away with requiring additional assignments, thereby getting a better sense of who each student is and what each one knows.
"In a face-to-face class, all you're actually getting is their papers," quips Virga.
"In an online class, it's paradoxical,
because even though they're not there,
you can demand and expect more."
Improving Assessment
Looking forward, perhaps the best way to assess the performance of online assessment tools over time is to treat those tools as works in progress and keep refining them. The easiest way to
do this is simply to stay on top of recent
research into online assessments, a
chore that is perhaps best accomplished
by keeping abreast of the latest publications
that deal with the subject (see "For More Information").
On individual campuses, there are
other, more proactive options for implementing
the latest and greatest in online
assessments. Some educators, such as
those at Rose-Hulman, administer surveys
to all students who participate in
online learning, and go through survey
responses at the end of every semester
to see how they can improve the online
assessments and the web-based learning
experience overall.
Educators at Rio Salado are even more
meticulous: At the end of every school
year, Freed says instructors look back at
each individual assessment and compare
student performance on every question. If
a majority of students got a question
wrong, educators may go back and tweak
the wording or rewrite the question altogether.
If a majority of students got a
question right, educators might make the
query more challenging.
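Freed's year-end review boils down to a simple item analysis. Here is a short sketch of that logic; the response data and the majority thresholds are illustrative.

```python
# A short sketch of the year-end item analysis Freed describes: compute
# the share of students who got each question right, then flag items a
# majority missed for rewording and items a majority aced for hardening.
# The response data and the 50% thresholds are illustrative.

results = {
    # question id -> one True/False result per student
    "q1": [True, True, False, True],
    "q2": [False, False, True, False],
}

for question, answers in results.items():
    p_correct = sum(answers) / len(answers)
    if p_correct < 0.5:
        action = "reword or rewrite the question"
    elif p_correct > 0.5:
        action = "consider making it more challenging"
    else:
        action = "leave as-is"
    print(f"{question}: {p_correct:.0%} correct -> {action}")
```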
"More than anything, we want to make
sure that assessments align with what
we're teaching," she says, noting that the
process is indeed time-consuming, frequently
daunting, but still worth it
because of its impact on the quality of
the education delivered. "In the end,
the curriculum is more important than
[the work on] any assessment or online
interface."
:: WEB EXTRAS ::
Electronic Student Assessment: The
Power of the Portfolio.
Case Study: Seton Hall (NJ) Embraces
Assessment with Technology.
Matt Villano, senior contributing editor
of this publication, also writes regularly
for The New York Times, San Francisco Chronicle, and Sunset. He is
based in Healdsburg, CA.