A New Methodology for Evaluation: The Pedagogical Rating of Online Courses
In articles appearing in our November and December issues, Nishikant Sonwalkar
examined the elements of online learning within the structure of a “learning
cube.” Here, he proposes an instrument for evaluating online courses based
on those elements.
Online course offerings are increasing in number every day. Most universities
and corporate training facilities now offer some or all of their courses online.
In fact, more than 1,000 corporate universities and online providers offer courses
in everything from information technology to Chinese cooking. Although it is
clearly advantageous for asynchronous learners to access educational information
and content anywhere and anytime, it is difficult to evaluate the quality and
effectiveness of online courses and learning modules.
Growing Need for Evaluation
Open source learning platforms and public access to online course content are
increasingly popular because higher education can benefit from joint development
efforts and shared resources, which ultimately reduce the cost of online learning.
Consortia are sharing volumes of information and courseware, and several vendors
are providing their technology as open source materials.
In the open source, open content environment we are entering, it is important
to develop a common, objective scale and summative instrument with which to
measure the pedagogical effectiveness of online course offerings.
Models of Evaluation
In my two previous articles in Syllabus (November and December 2001), I described
the pedagogical learning cube (see Figure 1) in the context of instructional
design. In this article, I’ll again invoke the cube, including the five
functional learning styles—apprenticeship, incidental, inductive, deductive,
and discovery (x-axis); the six media elements—text, graphics, audio, video,
animation, and simulation (y-axis); and the third axis of the cube (the z-axis),
which represents the interactive aspects of learning.
Figure 1: The learning cube
Learning styles: L1 = apprenticeship; L2 = incidental; L3 = inductive; L4 =
deductive; L5 = discovery
The z-axis indicates the degree to which students are engaged with the learning
content, moving from a teacher-centric to a student-centered approach. This
interactivity axis (z-direction) of the cube may be defined in terms of five
elements: system feedback, adaptive remediation and revision, e-mail exchange,
discussion groups, and bulletin boards. With this definition of the learning
cube, a framework can be constructed to define pedagogy as a 3D space.
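The three axes of the cube lend themselves to a simple enumeration. As an illustrative sketch (the axis names are from the article; the `course` variable and its contents are hypothetical), a course can be described by the subset of each axis it uses:

```python
# The three axes of the pedagogical learning cube (names from the article).
LEARNING_STYLES = ["apprenticeship", "incidental", "inductive", "deductive", "discovery"]  # x-axis
MEDIA_ELEMENTS = ["text", "graphics", "audio", "video", "animation", "simulation"]         # y-axis
INTERACTIVITY = ["feedback", "revision", "e-mail", "discussion", "bulletin"]               # z-axis

# A hypothetical course occupies a region of this 3D space:
# the subset of each axis that its design actually exercises.
course = {
    "styles": {"inductive", "deductive"},
    "media": {"text", "graphics", "video"},
    "interaction": {"feedback", "discussion"},
}
```

Describing a course this way makes the later index calculation a matter of counting the elements present along each axis.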
Pedagogical effectiveness is at the heart of online offerings and defines critical
parameters for the evaluation of courses. However, learning management systems
provide the essential integrative layer for online courses. If online courses
are delivered in the context of learning management systems, several additional
factors must be considered in any evaluation.
I propose a new instrument for overall evaluation, based on a five-factor summative
rating system plus a pedagogy effectiveness index (PEI). The intent of the methodology
described here is to create objective criteria for evaluating the quality of
online courses based on the existing elements that represent pedagogical content.
The Pedagogy Effectiveness Index
Expanding on the above arguments, the pedagogical effectiveness of an online
course can be defined as a summation of learning styles, media elements, and
interactivity.
Assuming that each of those factors is equally likely and mutually exclusive,
a probability distribution tree diagram (see Figure 2) can be drawn with three
branches, one for each axis of the pedagogical learning cube, and sub-branches
for the elements along each axis. A PEI can therefore be determined by a summative
rule (see Figure 3). The corresponding probability multipliers are shown in a
simple matrix (see Figure 4).
Figure 2: The probability tree diagram for the pedagogical learning cube
Figure 3: The pedagogy effectiveness index expressed as a summative rule
Figure 4: Simple probability distribution matrix
Style            | Pi    | Media      | Pj    | Interaction | Pk
Apprenticeship   | 0.068 | Text       | 0.055 | Feedback    | 0.066
Incidental       | 0.068 | Graphics   | 0.055 | Revision    | 0.066
Inductive        | 0.068 | Audio      | 0.055 | E-mail      | 0.066
Deductive        | 0.068 | Video      | 0.055 | Discussion  | 0.066
Discovery        | 0.068 | Animation  | 0.055 | Bulletin    | 0.066
                 |       | Simulation | 0.055 |             |
Total (weighted) | 0.34  |            | 0.33  |             | 0.33
Consider the following cases as examples of the application of the PEI.
Case 1: The PEI for a course with one learning style, one media element, and
one interactive element will be:
PEI = 0.068 + 0.055 + 0.066 = 0.189
Case 2: The PEI for a course with three learning styles, four media elements,
and two interactive elements will be:
PEI = 3 * 0.068 + 4 * 0.055 + 2 * 0.066 = 0.556
Case 3: The PEI for a course with five learning styles, six media elements,
and five interactive elements will be:
PEI = 5 * 0.068 + 6 * 0.055 + 5 * 0.066 = 1.0
These cases illustrate that the PEI varies from 0 to 1. Pedagogical effectiveness
increases as cognitive opportunity increases with the inclusion of additional
learning styles, media elements, and interactions. Because the PEI is based on
a simple probability distribution, it should be treated as an approximate indicator,
valid within the assumptions listed above, specifically the flexible learning
approach depicted by the pedagogical learning cube.
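The three cases above can be reproduced with a few lines of Python. This is a sketch of the summative rule, not a definitive implementation; the function name and constants are taken directly from Figure 4:

```python
# Probability weights from Figure 4.
STYLE_WEIGHT = 0.068        # per learning style (5 styles  -> 0.34 total)
MEDIA_WEIGHT = 0.055        # per media element (6 elements -> 0.33 total)
INTERACTION_WEIGHT = 0.066  # per interactive element (5 elements -> 0.33 total)

def pei(num_styles: int, num_media: int, num_interactions: int) -> float:
    """Pedagogy effectiveness index: a summative rule over the cube's axes."""
    return (num_styles * STYLE_WEIGHT
            + num_media * MEDIA_WEIGHT
            + num_interactions * INTERACTION_WEIGHT)

print(round(pei(1, 1, 1), 3))  # Case 1: 0.189
print(round(pei(3, 4, 2), 3))  # Case 2: 0.556
print(round(pei(5, 6, 5), 3))  # Case 3: 1.0
```

The maximum of 1.0 is reached only when a course exercises every element along all three axes of the cube.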
Summative Rating for Online Courses
The PEI serves as an indicator of the pedagogical richness of a course. However,
online course delivery systems include several additional factors that affect
the measure of success. Objective criteria for a summative evaluation should
be applied in five major areas, including (1) content factors, (2) learning
factors, (3) delivery support factors, (4) usability factors, and (5) technological
factors.
These factors are evaluated with reference to the IMS, AICC, and SCORM learning
technology standards.
Content factors. The content is the basis for course delivery and must be good
to begin with. Mediocre content cannot be improved simply by infusing it with
pedagogical styles or multimedia tools. It is important that an independent
authority authenticates the accuracy and quality of the content. The source
and author of the content must be given proper attribution to avoid copyright
and compensation issues and to hold the author responsible for the content’s
quality.
Learning factors. The effectiveness of an online course depends on the quality
of pedagogically driven instructional design. The learning factors at the core
of the educational quality of an online course include concept identification,
pedagogical styles, media enhancements, interactivity with the educational content,
testing and feedback, and collaboration. Often, the objectives of a course are
not well-defined. It is important that the instructional design is sensitive
to the functional learning style that accommodates individual content sequencing
and aggregation preferences.
Delivery support factors. The success of an online course depends heavily on
the delivery support function essential for course instructors, administrators,
and users. A software module should manage user authentication, portfolio information,
and records of users’ activities throughout the course, as well as course
content elements—including video streaming servers, audio servers, and
the HTML server. Also, the federal government now requires colleges and universities
to make access to online course content available to students with vision and
hearing impairments.
Usability factors. Even if the quality of the content, pedagogical styles,
and multimedia tools is high, an online course can be a complete failure if
usability is poor. Users interact with online Web courses through a graphical
user interface, so the design of graphic elements, the color scheme, the type
fonts, and navigational elements can all affect how a course is organized and
perceived by students.
Web pages that are overloaded with information require excessive scrolling within
a window, which can be detrimental to the educational quality of the presentation.
Design experts recommend presenting small chunks of information in 800x600-pixel
windows. Page layout and ease of access from other parts of the course site
are crucial to the success of an online course.
Technological factors. The issues that influence the technological success
of online courses include available bandwidth, target system configuration,
server capacity, browser software, and database connectivity. Network bandwidth
defines the lowest common denominator for the course Web page: designing
for 56 kilobit/sec modem access imposes far more constraints than designing for
a T1 connection at 1.5 megabits/sec. The number of simultaneous users a Web
server can handle is also an important constraint for the large-scale deployment
of online courses.
Designing courses to run in Microsoft Corp.’s Internet Explorer vs. Netscape
Communications Corp.’s Navigator makes a difference in the HTML 4.0 and
JavaScript features that can be included. The choice also affects the plug-ins
that may be required to run interactive applications. Most large-scale
online courses are powered by a database back end. The database connectivity
and connection pooling mechanism can become a bottleneck if not dealt with properly.
Most rating systems are summative and depend on a precise definition of the
quantitative scale. The most widely used rating system is the Likert scale,
which I have selected for the proposed summative evaluation instrument (see
Figure 5).
Figure 5: Summative evaluation instrument for rating online courses
Each item is rated on a Likert scale: Absent = 0, Poor = 1, Average = 2, Good = 3, Excellent = 4.

1. Content Factors: Quality; Authenticity; Validity; Media; Presentation; Attribution
2. Learning Factors: Concept Identification; Pedagogical Styles; Media Enhancements; Interactivity; Testing and Feedback; Collaboration
3. Delivery Support Factors: User Management; Course Content; Accessibility; Reporting
4. Usability Factors: Graphical User Interface; Interactive Design; Clarity; Chunk Size; Page Layout
5. Technological Factors: Network Bandwidth; Target System Configuration; Server Capacity; Browser Software; Database Connectivity
An Overall Rating
The summative evaluation results (the sum of the ratings of all the factors
in each of the five categories) and the PEI can be combined to give a final
result that provides a view of the overall effectiveness of the online course:
Overall Rating = PEI x Summative Rating Score
The advantage of using the overall rating formula lies in the ability to incorporate
the scores of both the pedagogical and delivery systems to provide a final rating
that will be useful for comparing online course offerings.
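Combining the two instruments is a single multiplication. The sketch below illustrates the shape of the calculation; the function name is illustrative, and the worked example (a course with the Case 2 PEI of 0.556 whose 26 Figure 5 items each earn a "Good" rating of 3, for a summative score of 78) is hypothetical:

```python
def overall_rating(pei: float, summative_score: float) -> float:
    """Overall rating = PEI x summative rating score (Figure 5 items, 0-4 each)."""
    return pei * summative_score

# Hypothetical course: PEI = 0.556 (Case 2), all 26 items rated "Good" (3),
# so the summative score is 26 * 3 = 78.
print(round(overall_rating(0.556, 78), 3))  # 43.368
```

Because the summative score scales with the number of rated items, comparisons across courses assume the same instrument (the 26 items of Figure 5) is applied to each.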
The pedagogy effectiveness index and the summative evaluation instrument used
in combination can be powerful tools for evaluating large numbers of online
offerings. These criteria have a clear emphasis on pedagogically driven design.
Widespread use of these tools could guide and motivate online education developers,
universities, and training centers toward the creation of educational systems
marked by measurable success.