Innovation, Access, and the Universally Smart Classroom
L. Scott Lissner
The Ohio State University
As the Americans with Disabilities Act (ADA) Coordinator at The Ohio State University, I’m occasionally faced with faculty members who resist taking the time or effort to make their instructional content universally available to all students. They typically say it makes more sense to adapt the material once a student who needs the adaptation enrolls. I’d like to use this Viewpoint to trace a number of technology-based innovations that began as “adaptive” technologies and ultimately became useful to all learners. I’ll conclude with a brief look at captioning software and sketch scenarios of how it will make my efforts at encouraging campus ADA compliance easier and simultaneously serve the needs of all students enrolled at the university.
Technological innovations have always been recruited into the service of teaching and learning. Students learn at different rates, and they bring a variety of learning styles and backgrounds. Teachers challenged to meet the needs of all students often turn to the latest technologies for solutions. Innovative applications of technology in the classroom energize, motivate, enrich, and facilitate the learning process. Ideally, those who resist the principles of universal design for learning will be encouraged to reconsider what a truly “smart” classroom would do for all their students.
The birth of the “smart classroom” in the United States might be traced to 1801, when George Baron introduced the first large black chalkboard into a classroom at West Point. This may have been the greatest innovation for teaching and learning since the printing press: it captured the moment and communicated print and drawings efficiently to large numbers of students. But not to everyone; even after Louis Braille published The Method of Writing Words, Music, and Plain Song by Means of Dots for Use by the Blind in 1829, his innovation was not adapted for reaching large groups of students in public settings. Today, pixels, dots clumped in different arrays, make meaning on digital displays much as Braille’s raised dots do on the page.
In the second half of the 19th Century, Alexander Graham Bell set out to build a machine that would recognize voices to assist deaf individuals. Ironically, his efforts led to the telephone, a device that served to exclude deaf individuals until the development of the TTY in 1964. Today it is hard to imagine the cellular phone industry without text messaging, another adaptive technology that serves the deaf population, as well as every teenager on the planet.
In the 20th Century, the pace of innovation picked up. It took less than a decade to go from the customized ENIAC in 1946 to Remington Rand’s sale of the UNIVAC to the Census Bureau in 1951. Over the next two decades, the mainframe computing industry grew even as the computers it produced got smaller. In 1971 the Kenbak-1, the first personal computer, was advertised for sale in Scientific American – assembly required. Personal computing took off over the following decade and began to find its way into every facet of our lives. However, like the book and the blackboard, the personal computer communicated via the screen and was therefore inaccessible to the blind.
While the personal computing industry was getting started in the 1970s, the ARPANET project was solving the problem of connecting various local computer networks. One of the major innovators on the ARPANET project was Vinton Cerf, a hearing-impaired programmer who co-designed the TCP/IP protocol for Internet communication. He cites his frustration in communicating with other researchers, and the effective computer-based communication he developed with his wife (who is deaf), as significant motivations for his work, and thus for the emergence of the Internet.
In 1974 a team led by Ray Kurzweil created the first “omni-font” (any font) Optical Character Recognition (OCR) system. While flying, Kurzweil happened to find himself seated next to a passenger who was blind. This chance encounter suggested an exciting application of the new OCR software: creating a machine that could read print aloud and address the limitation of the screen as the sole source of output. Developing the first “reading machine” required creating the first practical flat-bed scanner and improved speech synthesizers to complement the OCR software. Kurzweil’s work provided an important step in converting a visual medium to an auditory one and again spread benefits to the population at large.
In 1989 Ted Henter, a former motorcycle racer who lost his vision in an accident, developed software that used a speech synthesizer to navigate text-based MS-DOS applications. Later he adapted the software to navigate Microsoft Windows, giving blind users practical access to graphical user interfaces with Job Access With Speech (JAWS).
My admittedly slanted history of the technological innovations leading to today’s smart classroom has emphasized innovations that first excluded a subset of individuals with disabilities, and then included them through further innovation. Where will this history take us next?
In all the cases described, assistive technologies that were developed to compensate for disabilities resulted in technologies that help all students learn. Technological developments work in the other direction as well. For example, during the 1940s and 1950s, the U.S. Department of Defense was funding research in voice recognition. Since large-vocabulary, speaker-independent systems were beyond the available computing power, the focus was on trained, or speaker-dependent, systems. In today’s environment of powerful personal computers, an instructor using Nuance’s Dragon NaturallySpeaking can effectively train the computer to her voice and vocabulary in one to three hours, a small investment of time. Thereafter the software can transcribe her lectures or generate notes for student use.
Take this scenario one step further. If the instructor videotaped the lecture, the transcription could generate a caption track. The captioned video file is not only a Web-ready resource; the caption text also gives students, researchers, and other instructors a way to search the content. As with each new tool, we are again encouraged to rethink the most effective ways to reach the students who inhabit the classroom.
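To make the captioning step concrete, here is a minimal sketch, assuming hypothetical transcript segments rather than any particular product’s output. It writes timed text in the WebVTT caption format that Web video players can load alongside a recording, then reuses the same text as a rudimentary search index; the function names, timings, and lecture text are all invented for illustration.

```python
# A minimal sketch: timed transcript segments -> WebVTT captions -> searchable text.
# The segment data below is hypothetical; in practice the timings and text would
# come from whatever transcription tool the instructor uses.

def to_timestamp(seconds: float) -> str:
    """Format a time in seconds as an HH:MM:SS.mmm WebVTT timestamp."""
    total_ms = int(round(seconds * 1000))
    hours, rem = divmod(total_ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, millis = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{millis:03d}"

def segments_to_webvtt(segments: list[tuple[float, float, str]]) -> str:
    """Build a WebVTT caption document from (start, end, text) segments."""
    lines = ["WEBVTT", ""]
    for start, end, text in segments:
        lines.append(f"{to_timestamp(start)} --> {to_timestamp(end)}")
        lines.append(text)
        lines.append("")
    return "\n".join(lines)

# Hypothetical fragment of a transcribed lecture.
lecture = [
    (0.0, 4.5, "Today we begin with the origins of the smart classroom."),
    (4.5, 9.0, "The chalkboard arrived at West Point in 1801."),
]

print(segments_to_webvtt(lecture))

# The caption text doubles as a search tool: find where a topic is discussed.
query = "chalkboard"
hits = [start for start, _, text in lecture if query.lower() in text.lower()]
print(f"'{query}' first mentioned at {to_timestamp(hits[0])}" if hits else "No match")
```

The point is not the particular format; it is that a single transcription pass yields captions for access, a Web-ready artifact, and an index into the lecture’s content all at once.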
As the ADA Coordinator for The Ohio State University, I should take a moment to point out that these speech recognition improvements relate directly to compliance issues facing colleges and universities. The Americans with Disabilities Act and Section 504 of the Rehabilitation Act require that we provide access to our programs and services. While the application of the law to Web-based instruction and classroom technologies is still emerging, the direction is clear: digital though it is, “the Web” is a classroom and will require access. Section 508 of the Rehabilitation Act, the legislation governing federal purchases of electronic and information technology, is helping to shape the Web as a learning space.
Broadly, compliance with Section 508 requires that you consider how information is conveyed (text, audio, icons, video, graphics) and offer ways to communicate it in alternative modalities. The 508 standards would also have you evaluate how individuals navigate through information and establish alternatives. What could be more appropriate to the task of instructional design? On a more detailed level, Section 508 addresses the clarity of visual presentations, ease of navigation, and perceptibility of information through its requirements for contrast, captioning, alternative cueing, and alternative controls.
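To give one concrete sense of what a requirement like contrast means in practice, here is a minimal sketch that scores a foreground/background color pair with the W3C’s WCAG relative-luminance formula. The specific colors and the 4.5:1 target for body text are illustrative assumptions drawn from WCAG, not requirements spelled out in this Viewpoint.

```python
# A minimal sketch: checking whether slide text has enough contrast against its
# background, using the WCAG relative-luminance and contrast-ratio formulas.
# The color values are hypothetical.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    """WCAG relative luminance of an 8-bit sRGB color."""
    def linearize(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (linearize(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg: tuple[int, int, int], bg: tuple[int, int, int]) -> float:
    """Contrast ratio between two colors; ranges from 1:1 to 21:1."""
    lighter, darker = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Dark gray text on a white slide background (hypothetical values).
ratio = contrast_ratio((51, 51, 51), (255, 255, 255))
print(f"Contrast ratio: {ratio:.1f}:1 (WCAG suggests at least 4.5:1 for body text)")
```

Checks like this are easy to automate, one small example of how designing for access up front fits naturally into ordinary instructional design.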
Compliance, however, is just one benefit of using technologies such as Dragon NaturallySpeaking to support instruction. The standards for access typically make information and products more flexible and increase usability. A Web page that meets 508 or W3C standards translates easily to the screen of a PDA. Captioned podcasts can be catalogued and easily reused. Voice output can provide pronunciations of technical and foreign vocabularies. Access is not about adapting materials for one student; it is about flexibility and adapting to many students. Adding depth and breadth to instructional delivery is how technologies make a classroom smart and, as this example illustrates, may lead us next into mobile learning environments as tomorrow’s smart classrooms.
L. Scott Lissner ([email protected]) is ADA Coordinator at The Ohio State University.