Seeing for People Who Are Blind or Visually Challenged

It is perhaps a little-known fact that after about four years of not seeing, our memory of how things look begins to erode. This state of "deep blindness" is described in John Hull's book, Touching the Rock: An Experience of Blindness, where he suggests that if our visual sense remains inert for about four years, "We will no longer remember which way the number 3 faces—or what the number 3 looks like."

Once on the brink of "deep blindness" myself, I have devoted twelve years of research at M.I.T.'s Center for Advanced Visual Studies to keeping even a damaged visual sense alive as an active part of my sensory apparatus.

As part of a pre-surgical eye exam, when I had no usable vision in either eye, I was introduced to the Scanning Laser Ophthalmoscope (SLO). The SLO was invented by Rob Webb, senior scientist at the Schepens Eye Research Institute at Harvard University, to allow ophthalmologists to observe a patient's retina during stimulation. Using a helium-neon laser and a complex system of optics, the machine scans images straight into a patient's eye while an infrared camera uses the same optical system in reverse to capture an image of the retina. Finding that I could see the "test pictures" projected onto my retinas—the first clearly identifiable images I had seen in months—I asked if I could try to see the word "sun." I did see it. Since that time, working with Rob Webb, I have become an advocate of using the SLO as a "seeing machine" in the hopes that it might allow others who are visually challenged to use their sense of sight.

Initially, our research will have the best chance to flourish in higher-education settings, where it can be adapted to individual needs and where students and faculty value dialogue, multiple perspectives, and quality of life. Traditionally, programs for the blind have facilitated daily tasks at the expense of challenging the mind. Now, because of advances in technology, more people who are blind are attending universities. These students, as well as people who have age-related macular degeneration, are demanding enhanced creative lives and the chance to make substantial contributions to the community at large.

The Language of Seeing

Because of advances in brain science and seeing technology over the past decade, it is theoretically possible for almost everyone who is now blind to see "something," even if it is merely adequate visual stimulation to avoid "deep blindness." However, these chimeric glimpses still exist primarily in university laboratories and research centers. Why? Because the focus has been on science and engineering rather than on a language of seeing. It requires enormous energy and commitment to look at anything if one does not see. It is crucial that what those of us with bad eyes are asked to look at is legible, useful, even compelling: in short, worth the effort.

Our research focuses on creating a visual language and visual experiences for people with little or no sight. I use the SLO, which I feel is currently the best available seeing technology. I am also taking advantage of recent developments in virtual environments, haptic browsers, and communications technology. My goals are to develop networked libraries of visual experiences, and eventually programs, permitting people who are blind to communicate visually, thus encouraging shared perspectives that may be uncommon to 20/20 eyes.

The SLO as a seeing machine works for the viewer in much the same way that it works as a diagnostic tool for the patient, except that when it is used as a seeing aid, the "test pictures" are replaced by simple images and graphics. The SLO seeing machine presents a bright, high-contrast image that can be seen even with only a small portion of non-foveal retina. Another characteristic of the SLO is that it uses "Maxwellian view": it forms a small image of the light source at the pupil, resulting in a high-contrast image of uniform luminance that is undisturbed by defects in the cornea and lens.
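
As a rough quantitative gloss (a standard visual-optics relation, not a formula from the original research), the brightness such a system delivers to the retina can be expressed in trolands:

```latex
% Retinal illuminance E_r in trolands, for a stimulus of luminance L
% (cd/m^2) seen through an effective pupil of area A (mm^2):
E_r = L \cdot A
% In Maxwellian view, A is approximately the small area of the source's
% image formed at the pupil, so the beam passes through only a narrow
% region of the cornea and lens; this is why defects in those surfaces
% barely disturb the image.
```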

Our SLO model, on loan from Canon U.S.A., performs far more operations than a seeing machine requires. Based on our experience with the SLO, we are building a seeing machine that will be far simpler, cheaper, smaller, and more portable than the diagnostic medical tool we currently use.


Peony Afterimage is a retina print characterized by Goldring as a "frozen moment of seeing," replicated to show what she sees as she looks through the SLO.


After undertaking an extensive study of pictorial languages, I created my own visual language centered on short, basic English nouns and verbs, and on words indicating spatial relations. Often I combine graphical images with letters, forming "word-images" that enhance word meaning and legibility. For example, the word "sun" is much easier to read if the s and n are separated by a sun graphic. The graphic not only enhances the meaning but also, by separating the curvilinear forms of the s and the n (the u is omitted), makes the individual letters more legible. I then use the word-images to compose short animated texts in HyperCard. These animations are also legible through the SLO if they are short enough (no more than four to six animated word-images).
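
To make the construction concrete, here is a minimal sketch in Python with Pillow (standing in for the original HyperCard work, which is not reproduced here; the font file name is an assumption and can be replaced with any bold TrueType font on hand) that composes a high-contrast word-image of "sun" with a sun graphic in place of the u:

```python
# Minimal word-image sketch: "s", sun graphic, "n" in white on black.
import math

from PIL import Image, ImageDraw, ImageFont

W, H = 480, 220
img = Image.new("L", (W, H), 0)                 # black field for contrast
draw = ImageDraw.Draw(img)
# Hypothetical font file; substitute any bold TrueType font you have.
font = ImageFont.truetype("DejaVuSans-Bold.ttf", 150)

draw.text((50, 20), "s", fill=255, font=font)   # letters kept far apart
draw.text((350, 20), "n", fill=255, font=font)  # so each stays legible

# Sun graphic between them: a disc with eight radiating strokes.
cx, cy, r = W // 2, H // 2, 35
draw.ellipse((cx - r, cy - r, cx + r, cy + r), fill=255)
for k in range(8):
    a = k * math.pi / 4
    draw.line(
        (cx + 1.4 * r * math.cos(a), cy + 1.4 * r * math.sin(a),
         cx + 2.1 * r * math.cos(a), cy + 2.1 * r * math.sin(a)),
        fill=255, width=8,
    )

img.save("sun_word_image.png")
```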

Once we had hooked the SLO up to a Mac workstation, the next step was to connect it to the Internet. A first test using CU-SeeMe allowed my ophthalmologist to watch my retina as it looked at examples of my visual language, and permitted me to see the faces of the people with whom I was communicating. When I viewed the faces one at a time, they were very clear, including the expression in their eyes. Since I usually see faces only as indistinct or featureless moons, this was a remarkable discovery. As I began looking at a variety of objects, I discovered that, as long as the objects were isolated and not too complicated, I could see flowers, faces, animals, and landscapes. I could perceive some objects through the SLO even though the system is high-contrast and I am able to use only a small portion of retina in one eye.

Using Virtual Reality to Create Spatial Animation

The possibility of spatial animation led to our first experiments with virtual reality and the creation of limited virtual environments for the SLO. Initially, the Silicon Graphics (SGI) Imaging System enabled us to develop spatial word-image animations (S N O W, R A I N) and other visual poems falling in 3D VR space, as viewed through the SLO.
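
A toy stand-in for that effect, in Python with matplotlib rather than the SGI system (the letter positions, fall rate, and release delays are arbitrary assumptions), drops the letters of S N O W through a 3D space, each staggered in depth:

```python
# Toy spatial animation: letters of S N O W fall through 3-D space,
# white on black, each released slightly later and at a different depth.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401 (older matplotlib)
from matplotlib.animation import FuncAnimation

LETTERS = "SNOW"
XS = (-1.5, -0.5, 0.5, 1.5)        # spread across the width of the space
ZS = (0.5, 1.0, 1.5, 2.0)          # staggered depths
TOP, SPEED, DELAY = 4.0, 0.08, 5   # start height, units/frame, frames

fig = plt.figure(facecolor="black")
ax = fig.add_subplot(projection="3d")
ax.set_facecolor("black")
ax.set_xlim(-2, 2); ax.set_ylim(-1, 5); ax.set_zlim(0, 3)
ax.set_axis_off()
texts = []

def update(frame):
    # Redraw each letter at its new height for this frame.
    for t in texts:
        t.remove()
    texts.clear()
    for i, (ch, x, z) in enumerate(zip(LETTERS, XS, ZS)):
        y = TOP - SPEED * max(frame - DELAY * i, 0)
        texts.append(ax.text(x, y, z, ch, color="white",
                             fontsize=30, ha="center"))
    return texts

anim = FuncAnimation(fig, update, frames=80, interval=80)
plt.show()
```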


Goldring, seated at the SLO, is viewing an example of her visual language. Behind her is a live video projection of her retina looking at the word-image.


Virtual reality seems an appropriate visual tool for people who are visually challenged. Its main limitation is the computing power needed to render a realistic scene in every detail. People who are blind or visually challenged, however, require fewer pixels than fully sighted viewers to reach the level of information they can grasp visually by looking through the SLO, and even with fewer pixels they can translate that visual information into a believable, useful visual scene.
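
As a rough illustration of that reduced pixel budget (a sketch in Python with Pillow; "scene.png" is a placeholder file name, and the 64 x 48 grid and threshold of 128 are arbitrary assumptions), a detailed rendering can be collapsed into the kind of coarse, high-contrast image that remains legible through the SLO:

```python
# Collapse a rendered scene to a coarse, binarized, high-contrast image.
from PIL import Image

scene = Image.open("scene.png").convert("L")     # grayscale source render
coarse = scene.resize((64, 48), Image.NEAREST)   # aggressive pixel budget
binary = coarse.point(lambda v: 255 if v > 128 else 0)  # hard threshold
# Blow the coarse grid back up for display without smoothing it away.
binary.resize(scene.size, Image.NEAREST).save("scene_high_contrast.png")
```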

We have recently created a virtual model of a building at MIT that can be seen through the SLO. The idea is that people may pre-visit the building before going there in person, and that in the future, all public buildings will be accessible as virtual models over the Internet. For those of us who are visually challenged, pre-visiting a site would go far in alleviating the anxiety of going somewhere for the first time. Knowing where the steps are, which way the doors swing, what is immediately on the other side of the door, and where the elevators are and how the buttons are configured would greatly increase our sense of independence and confidence.

The virtual model of MIT's Building 68, which contains vital information and essential signposts, is visible through the SLO. The cursor normally used to navigate such a model, however, is not clearly visible, so we have replaced it with a white stick. Instead of a mouse or a touch pad, our navigation system is the three-dimensional Phantom, developed by SensAble Technologies. This system allows the SLO viewer, or any viewer, to navigate through the environment and to experience objects in the environment through force feedback. The Phantom mimics the white stick in the way it senses the environment, except that here the tactile sense reinforces the visual sense. We are currently working on an environmental audio component to support the visual experience.
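
To suggest how such force feedback behaves, here is a toy sketch in Python (not the SensAble Phantom SDK; the wall position and stiffness are made-up values) of the penalty-force idea commonly used in haptic rendering: when the virtual stick tip penetrates a surface, the device pushes back with a spring force proportional to penetration depth.

```python
# Penalty-force haptics in one dimension: a virtual wall pushes back on
# the white-stick tip in proportion to how far the tip penetrates it.
from dataclasses import dataclass

@dataclass
class Wall:
    x: float          # wall plane at x = const; free space lies at x < const
    stiffness: float  # spring constant (N/m) for the penalty force

def feedback_force(tip_x: float, wall: Wall) -> float:
    """Force along x sent back to the haptic handle (0 when not touching)."""
    penetration = tip_x - wall.x
    return -wall.stiffness * penetration if penetration > 0 else 0.0

# Simulated sweep: the stick tip approaches, touches, and presses into
# the wall; the returned force is what the handle would exert on the hand.
wall = Wall(x=1.0, stiffness=800.0)
for tip_x in (0.50, 0.90, 1.00, 1.01, 1.03):
    print(f"tip at {tip_x:.2f} m -> force {feedback_force(tip_x, wall):+6.1f} N")
```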

Our Research and the Internet

We are looking forward to testing our model of Building 68, along with the seeing-machine prototype we are constructing and a modified Phantom navigation system. By 2002, we expect to test the workstation, first at MIT and then at rehabilitation centers and low-vision clinics in the Boston area. The next step is to create Internet libraries for the visually challenged, accessible with seeing machines like the one we are building. The step after that is to develop interactive visual communications programs for people who are blind and visually challenged. I firmly believe that our experience will have applications for everyone suffering from visual information overload and from the avisual characteristics of the Internet (although it is primarily a visual tool). Better, more legible visual organization of Internet Web sites and offerings is badly needed and may benefit from our research as we test, refine, and modify our design.
