Shifts in Thinking: A Primer on the Foundations of Instructional Technology Assessment

There comes a point when nearly everyone involved in education begins thinking about computers and learning. On such occasions, it is easy to become distracted by simple assessment questions. Does computer-based education work better? Do students learn more? Can computers replace teachers? Straightforward answers to these questions would make decisions about computers and education simple. Unfortunately, early work on instructional technology has shown that such questions are deceptive, and are not likely to lead to useful answers and clear decisions. In fact, they tend to cloud a complex issue.

We would like to help you leave the simple questions behind, and prepare you to think more carefully about assessment questions. First we must answer the question, "What are we assessing?" A clear understanding of the functional definitions of technology and instructional technology environments will shift your thinking away from the deceptively simple questions illustrated above to more profound issues. Recognizing these issues will help you design assessments that help educators unravel the complex world of technology, media, and learning.

Rethinking Technology

Instructional technology typically implies some kind of computer-based instruction, whether on the World Wide Web or on a CD-ROM. It may include alternative forms of computer-based instruction, like multimedia, or very basic drill-and-practice educational software. Sometimes the technology is not computer based, but is clearly typical of contemporary technologies; video and television are just two examples of technologies used in education that are not computer based. The broad range of technologies used in instruction raises the question: Just what technology is being assessed in an instructional technology assessment? Software is a form of technology, computers are a form of technology, and video is a form of technology. What, broadly, is a technology? And what, specifically, is an instructional technology?

In the broadest sense, a technology "is a repeatable process, or able to be used repeatedly with the same consequences" (Lumsdaine, 1963; O’Neil & Baker, 1994). By this broad definition, many processes qualify as technologies. The process of rewarding good behavior to encourage more of the same is an application of technology. Exercising to improve physique and muscle tone is also an application of technology. Notice that in these two examples, what is identified as the technology is different from specific ways in which it is applied. For instance, many different kinds of rewards may help sustain good behavior, and many different kinds of exercises produce better physical fitness. Systematically rewarding good behavior invokes the technology of positive reinforcement, while exercising invokes the physiological process that develops stronger muscles.
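This separation of process from application can be sketched in code. In the toy Python example below, the "technology" is a repeatable procedure and the particular rewards are interchangeable applications of it; all names are our own illustration, not drawn from the literature.

  # The repeatable process (the technology) is kept separate from the
  # specific rewards (its applications).
  def positive_reinforcement(behavior, deliver_reward):
      # Observe a desired behavior and deliver a reward to sustain it.
      deliver_reward()
      return f"'{behavior}' reinforced"

  # Two different applications of the same underlying technology:
  positive_reinforcement("homework turned in", lambda: print("gold star"))
  positive_reinforcement("quiet reading", lambda: print("extra recess"))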

This broad definition of technology directs how we need to think about instructional technology and assessment of its use. Foremost, it implies that an underlying instructional technology needs to be considered separately from the specific applications and systems that deliver it (Clark, 1994). This point is skillfully elaborated in a seminal article by Case and Bereiter (1984).

Case and Bereiter trace a history of instructional technology without ever mentioning the words computer, television, or video. They describe instructional technologies in terms of the underlying repeatable steps required for learning. In their article, they discuss several different kinds of instructional technologies, and focus on two: behaviorism and cognitive development. Each of these instructional technologies can be viewed as a repeatable process (a technology) suitable for engineering a variety of instructional lessons. An instructional technology, then, according to Case and Bereiter, is a recipe for designing instructional materials. Figure 2, below, describes the steps in a cognitive development approach to learning as described by Case and Bereiter.

This "technology" defines a method for designing instructional materials. Case and Bereiter tested this instructional technology in a clinical setting with a range of students, problems, and learning tasks, from long division to sounding out word problems. Their results illustrated that the "instructional technology of cognitive development" could be used to design successful instructional materials.

Rethinking the Instructional Technology Environment

The shift away from thinking of computers and software as the instructional technology has profound implications for identifying components of an instructional technology environment. It redirects the emphasis from the mechanics of delivering instruction (i.e., computers, televisions, teachers, and so forth) to the underlying properties of instructional technology systems. By this way of thinking, assessments focus not only on the instructional theory behind a lesson, but also on the capabilities of the delivery mediums (e.g., computers, televisions, teachers), the media used in the instruction (pictures, text, sound), and the method or design used to create the instruction. It is a shift from face-value differences to an appreciation of the underlying differences and similarities that truly define an instructional system. This shift, and the door it opens for assessment, has been explored most extensively by Clark (1985a, 1985b, 1994) and Bangert-Drowns (1985).

To formalize and capture this shift, Kozma and Bangert-Drowns first presented a two-dimensional framework, shown in Figure 1, below, that separates the delivery medium (e.g., computer or television) from the instructional method. The capabilities of the mediums are differentiated by the symbol systems they can accommodate and the control capabilities they offer. For instance, radio can present only audio-linguistic systems, while television may present pictorial symbol systems as well as both audio and visual linguistic codes. Control capabilities are the ways in which the mediums can process or manage information. Computers and teachers offer the most in the way of control capabilities.

Figure 1. Medium and Methods

  Medium:  Symbol Systems; Control Capabilities
  Method:  Activating; Modeling; Short Circuiting

The three instructional methods are drawn from a taxonomy of instructional designs first proposed by Corno and Snow (1986). Activating designs prompt students to use cognitive skills, knowledge bases, and motivations they already possess, leading to new learning and applications of inert knowledge (Brown, Collins, and Duguid, 1989). Such designs typically feature the least structure and impose the largest information-processing demands on the students. Modeling programs illustrate and guide, but still leave much of the cognitive processing and learning to the student. Finally, short circuiting designs take on the cognitive burdens for students who cannot yet carry them themselves.
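One way to keep the two dimensions straight is to model the framework as a small data structure. The Python sketch below is purely illustrative; the class and attribute names are our own assumptions, not part of the published framework.

  from dataclasses import dataclass
  from enum import Enum

  class Method(Enum):
      # Corno and Snow's taxonomy of instructional methods
      ACTIVATING = "activating"              # prompts skills students already possess
      MODELING = "modeling"                  # illustrates and guides
      SHORT_CIRCUITING = "short circuiting"  # carries the cognitive burden

  @dataclass
  class Medium:
      # A delivery medium, characterized by what it can present and control
      name: str
      symbol_systems: set         # e.g., {"audio-linguistic", "pictorial"}
      control_capabilities: set   # ways it can process or manage information

  @dataclass
  class Design:
      # One cell of the two-dimensional framework: a medium/method pairing
      medium: Medium
      method: Method

  television = Medium("television",
                      {"audio-linguistic", "visual-linguistic", "pictorial"},
                      {"fixed-pace playback"})
  lesson = Design(medium=television, method=Method.MODELING)

Laying the framework out this way makes the assessment question explicit: the same method can ride on different mediums, and the same medium can carry different methods, so the two must be considered separately.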

The Role of the Student in an Instructional Technology Environment

What this two-dimensional framework lacked was a careful consideration of how the different instructional technology designs (choices of medium and method in one instructional package) interact with students of different abilities and with varied educational environments. This is a serious issue for the design of assessments, as it could lead to an underestimate of the effectiveness of an instructional treatment (Kozma and Bangert-Drowns, 1987). For instance, some instructional technology designs might work well with certain kinds of students but be of little use to others. Unless this differential is isolated, positive results for one group might be dampened by poor results for a different group of students. To accommodate this possibility and reduce the risk of a misleading result, Kozma and Bangert-Drowns added a third category, context. Context includes three components: the learner, the learning task, and the educational setting. For brevity’s sake, we will introduce only the student context.
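Continuing the illustrative sketch above, context becomes a third field alongside medium and method. The three components (learner, learning task, setting) come from the framework; the field details are our own assumptions.

  @dataclass
  class Context:
      # Kozma and Bangert-Drowns' third category
      learner: dict        # e.g., prior knowledge, abilities, attitudes
      learning_task: str   # e.g., "long division"
      setting: str         # e.g., "clinical lab" or "classroom"

  @dataclass
  class Assessment:
      # An assessment varies over (medium, method, context) triples,
      # not over a single "does it work?" question.
      design: Design
      context: Context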

Focusing on the role of the learner/student represents an attempt to use assessment as a measure of individual differences in students (Snow, 1989) in order to select the most useful instructional treatment for each student. Aptitudes in this setting are defined not only as broad psychometric properties like IQ, but in terms of how prior knowledge, cognitive abilities, and personality characteristics relate to learning environments and tasks. Richard Snow’s work illustrates how each may play a role during instruction.

Snow looked at the effect of film versus live lecture for teaching physics. Individual independent variables did not reveal much. For instance, the variable prior knowledge in physics, when considered alone, did not differentiate the effect of each treatment. When considered with measures of verbal and mathematics skill, however, as well as measures of attitudes towards the use of instructional films, significant interactions were observed. Prior film learning aided new film learning, particularly when students had little prior physics knowledge. A simple assessment question, "Is film better for teaching students?" would not have uncovered these differential positive effects. Looking carefully at differences among students is thus a key part of the assessment process.
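A toy calculation shows why the simple question fails. The numbers below are invented for illustration and are not Snow's data; averaged over all students the two treatments look identical, yet each subgroup shows a strong differential effect.

  # Mean outcome by (treatment, prior physics knowledge); invented numbers.
  scores = {
      ("film", "low prior knowledge"): 80,
      ("film", "high prior knowledge"): 70,
      ("lecture", "low prior knowledge"): 60,
      ("lecture", "high prior knowledge"): 90,
  }

  # The simple question "Is film better?" averages across learner groups:
  film_avg = (scores[("film", "low prior knowledge")] +
              scores[("film", "high prior knowledge")]) / 2
  lecture_avg = (scores[("lecture", "low prior knowledge")] +
                 scores[("lecture", "high prior knowledge")]) / 2
  print(film_avg, lecture_avg)  # 75.0 and 75.0: no apparent difference

  # Disaggregating by learner group reveals the interaction:
  for (treatment, group), score in scores.items():
      print(f"{treatment:8} | {group:20} | {score}")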

Shifts in Thinking: Implications for Assessment

The two shifts described above, from thinking of computers as technologies to thinking of learning theories as technologies, and from delivery systems to the overall instructional environment, have several implications for the assessment of computer-based instruction and other computer-based education. First, they illustrate the need to identify the learning theory behind the specific instructional application, and to determine if it is an appropriate theory for the learning task. Case and Bereiter were able to show that cognitive development was useful for achieving certain kinds of learning. Other kinds of instructional technologies, like behaviorist technology, however, might be more suitable for different kinds of learning goals, like helping students develop good table manners or classroom decorum. Any assessment needs to identify the underlying instructional technology and to consider its usefulness with respect to the learning goal.

Second, underlying instructional models, such as cognitive development, suggest assessment strategies that help identify why some students might not be successful. One goal of assessment is to learn which step in the process is responsible when the process does not work. For instance, are some students using a sub-optimal or inappropriate learning strategy (step 2 in the cognitive development outline shown in Figure 2)? These kinds of targeted questions are crucial if assessment is to help improve instruction, and not simply to indicate which of two instructional designs performed better. A small sketch following the figure illustrates this kind of step-by-step diagnosis.

Figure 2. Cognitive Development Technology for Learning
  1. Identify the task to be taught and develop a measure to assess students’ success or failure on it.
  2. Develop a procedure for assessing the strategy students employ on the task.
  3. Use this procedure to assess the strategies used by students at a variety of ages, both those where success is not achieved by current methods, and those where it is.
  4. Devise an instructional sequence for "recapitulating development", i.e., for bringing students from one level to the next, in the course of instruction.
  5. Keep the amount of material students need to attend to (i.e., in working memory) at each step within reasonable limits.
  6. When a student’s performance at one level becomes relatively automatic, move on to the next.
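As a sketch of that diagnostic use, the six steps can be treated as a checklist an assessment walks when instruction fails. The step texts below paraphrase Figure 2; the diagnostic framing and the function name are our own.

  # Case and Bereiter's recipe as a checklist for locating a failure.
  STEPS = [
      "Identify the task and develop a success/failure measure.",
      "Develop a procedure for assessing students' strategies.",
      "Assess strategies across ages, for successful and unsuccessful students.",
      "Devise an instructional sequence that recapitulates development.",
      "Keep working-memory demands at each step within reasonable limits.",
      "Move to the next level once performance becomes relatively automatic.",
  ]

  def diagnose(failed_step):
      # Point the assessment question at the step where the process broke down.
      return f"Check step {failed_step}: {STEPS[failed_step - 1]}"

  # A sub-optimal learning strategy, for example, points back to step 2:
  print(diagnose(2))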

Finally, each of these technologies, behaviorism and cognitive development, can spawn any number of individual instructional designs. Each can be implemented in a number of ways (e.g., by means of a computer or television, or by a teacher or another student). The technology and its particular implementation must be considered separately.

Helping Faculty Appreciate Instructional Technology Assessment

Appreciating the more complex view of assessment outlined here takes time. How can a university help its faculty achieve these shifts in thinking? At Tufts University, the Academic Technologies department has instituted three grants programs to help faculty develop their interest in instructional technologies. The most basic of these programs is the Teaching with Technology Fellowship Program, which requires faculty to submit a grant proposal describing an instructional technology project they would like to pursue. Support comes in the form of funding, access to technical support (i.e., Power! Teams of Tufts’ best technical students and a lab of high-tech gear), retreats with educators and experts in assessment, and both individual and group meetings with fellows.

While it is wonderful if grantees generate high-quality instructional technology, the real goal of the program is to promote faculty development, especially the kinds of shifts in thinking outlined here. Our view is that for faculty to appreciate the complexity of creating and assessing instructional technology, they need to work through the process in their own terms, with what they believe are their own best ideas.

Steven Cohen ([email protected]) is the director of the Center for Assessment of Technology and Media in the Higher Education Experience, and Barbara McMullen ([email protected]) is the director of Academic Technology, both at Tufts University. Web: www.tufts.edu.
