
How Much Reality Does Simulation Need?

Today's students are immersed in a world of images that draw them into multi-sensory experiences. These are often provided by various entertainment genres, from video games (individual or multi-user) to movies. Young and old alike find the engagement compelling, which has led to the burgeoning gaming industry and to laments from the English faculty about the deterioration of linear narrative.

Developments in computer graphics have brought a new realism to video games, movies, and simulations. Blending reality with a suspension of physical constraints made possible by computer simulation has given rise to characters such as Spider-Man, who swings by a thread through the canyons of Manhattan. We perceive that experience unfolding as "real." We certainly remember these scenes from the cinema, but if the same computational power were applied to learning, would the impact be as powerful?

Chris Dede at Harvard has been studying the impact of adding multi-sensory perceptual information to aid students struggling to understand complex scientific models. He and his colleagues have built virtual environments such as NewtonWorld and MaxwellWorld to test how they affect learning. Providing experiences that leverage human pattern recognition capabilities in three-dimensional space (e.g., shifting among various frames-of-reference and points-of-view) also extends the perceptual nature of visualization.

Their work has concentrated on middle school students who have not scored well on standardized tests of scientific understanding. Among the questions they are investigating is the motivational impact that graphical multi-user simulation environments have on learning. These environments include some or all of the following characteristics: 3-D representations; multiple perspectives and frames-of-reference; a multi-modal interface; simultaneous visual, auditory, and haptic feedback; and interactive experiences unavailable in the real world, such as seeing through objects, flying like Superman, and teleporting.

What have they found? With careful design, the characteristics of multi-dimensional virtual environments can interact to create a deep sense of motivation and concentration, thus helping students to master complex, abstract material.

This might suggest that the more realistic the virtual environment becomes, the better the learning. Maybe. Of course, these technology-infused approaches to learning are the modern-day version of John Dewey's assertion that students learn by doing. Translated into today's computer-enhanced learning environment, the rich perceptual cues and multi-modal feedback (e.g., visual, auditory, and haptic) provided to students in virtual environments enable an easier transfer of simulation-based training to real-world skills (Dede, Salzman, Loftin, and Sprague, 1999).

The situation is, in the words of the famous "8-ball" predictor of the future, "decidedly mixed," however. In the mid-1990s several studies found that guidance was critical to deriving the greatest value from simulation exercises. When students in a graphically rich, multi-sensory learning activity do not know how to complete a given task, the learning process can get bogged down as the students search for some method of completing it (Hill and Johnson, 1995). If they do not have a clear understanding of the tasks they are being asked to learn, they may perform them erroneously without realizing it, and thus learn the task incorrectly. Furthermore, the students might acquire incorrect models of the systems with which they are interacting (Self, 1995).

What to do? One approach is to support collaborative learning, in which groups of students work together within the virtual environment. Having students work together is frequently an effective way to enhance learning. However, a group of students who are equally unfamiliar with the concept being taught may duplicate and reinforce each other's mistakes. And students who rely too much on the assistance of their colleagues may interrupt them excessively, diverting their attention and slowing the progress of the group.

Another approach is to have intelligent software agents that provide guided assistance to the student. This has been done in training for complex machinery, where a virtual autonomous agent demonstrates tasks, offers advice, and answers questions (Rickel, Stiles, and Munro, 1998).

Increasing the realism of graphically rich virtual environments, however, may not always translate to better learning. Developers of flight control environments have found that pilots are flooded with data. The information they need to fly their planes, however, is only a select subset of the information available. In fact, learning complex flight maneuvers may be impeded by the distractions of simulated reality and accelerated if only relevant information is provided, with the rest filtered out.

When training operators to fly unmanned air vehicles (UAVs) remotely, researchers looked at the effects of the level of automation on the number of simulated remotely operated vehicles that could be successfully controlled by a single operator (Ruff, Narayanan, and Draper, 2002). Their findings revealed that as the number of remote vehicles increased, the operators' ability to pilot the drones manually decreased.

However, if the autonomous controls made the flight decisions and the operators had only to correct the automated choices, they both reported higher levels of stress and performed poorly. Having the UAV autonomously make flight decisions, but requiring the operator to confirm each decision before it was carried out, resulted in the highest performance. Apparently, giving the operators selected tasks while automating major parts of a complex activity led to maximum performance.

Creating simulations with life-like realism is increasingly possible with computers growing in power every few months. Like many things with technology, just because we can do something may not mean we should. But there is one thing we could do to augment the multi-user virtual environment. We could add … a teacher!

MacWorld
Apple introduced two new laptops at MacWorld, one aimed at filling the conspicuous absence of a lightweight portable (4.6 lbs) and one bringing to laptops an enormous 17-inch screen. Apple used the Austin Powers Mini-Me character (Verne Troyer) and Yao Ming, the 7-foot-6 center for the Houston Rockets, to debut the new laptops side by side (www.apple.com/hardware/video). The new laptops add some nice ergonomic features (light-sensitive key illumination) along with a bold bet on the direction of wireless (802.11g built in, skipping 802.11a altogether).

More surprising was Apple's entrance into the browser arena with Safari. Optimized to run on Jaguar (Mac OS X 10.2.x), Safari offers some new features: it can read a Web page aloud or automatically generate a summary of a recently viewed page. But Safari's primary claim to fame is speed. Based on the open-source engine of the Konqueror browser, part of the K Desktop Environment (a graphical user interface that runs on Linux and Unix systems), Safari is leaner (smaller source code) and faster (both in startup and in page loads) than anything yet introduced, at least according to Apple.

These are well-engineered and sophisticated laptops. Whether they are too little, too late is the question for Apple. Some financial analysts appear to be nonplussed. The day of MacWorld, Merrill Lynch issued a sell recommendation. Almost simultaneously, Prudential Financial issued a "hold" rating on Apple Computer. It looks like Wall St. is just as uncertain about the future of Apple as the rest of us. Sometimes the "8-ball" says, "Outlook cloudy, try again later."

References

Dede, C., Salzman, M. C., Loftin, R. B., and Sprague, D. (1999). "Multisensory Immersion as a Modeling Environment for Learning Complex Scientific Concepts." In N. Roberts, W. Feurzeig, and B. Hunter (Eds.), Computer Modeling and Simulation in Science Education. Springer-Verlag.

Hill, R. W., and Johnson, W. L. (1995). "Situated Plan Attribution." Journal of Artificial Intelligence in Education, 6(1), 35-67.

Rickel, J., Stiles, R., and Munro, A. (1998). "Integrating Pedagogical Agents into Virtual Environments." Presence: Teleoperators and Virtual Environments, 7(6), 523-546.

Ruff, H. A., Narayanan, S., and Draper, M. H. (2002). "Human Interaction with Levels of Automation and Decision-Aid Fidelity in the Supervisory Control of Multiple Simulated Unmanned Air Vehicles." Presence: Teleoperators and Virtual Environments, 11(4), 335-351.

Self, J. (1995). "The Ebb and Flow of Student Modeling." Proceedings of the International Conference on Computers in Education (ICCE '95), 40-40h.
