Beginning the Third Decade
When did you buy your first computer? Where did you buy it? How much did
you pay for it? When did you send your first e-mail? First visit the Web? Make
your first purchase from Amazon? First cite a URL in a bibliography or a course
syllabus?
These seemingly simple questions reveal a lot about our individual and institutional
odyssey in the world of information technology. The nascent microcomputers of
the late 1970s and early 1980s, including the first IBM PCs and Macintosh computers,
introduced what Steve Gilbert and I tagged in a 1985 Change magazine
article as “The New Computing in Higher Education.”
It has, indeed, been a journey. Technologies that did not exist or were simply
emerging in 1985—personal computers, notebook computers, cell phones,
PDAs, and the Web—today have moved from incidental to essential. These
technologies, and others now emerging (for example, wireless), have made the
transformation from costly conveniences to compelling, inexpensive, and ubiquitous
necessities.
We—and our students—want and expect more: more technology tools,
more digital content, more resources, more stuff!
That said, there is no question that our aspirations for information technology
continue to exceed our individual and institutional capacity to innovate with
and integrate technology into instruction and operations. The early adopters
among us seem to integrate, effortlessly, all the emerging tools and technologies.
In contrast, the rest of us are engaged in a continuing game of digital catch-up.
But even as our reach exceeds our grasp, what we can—indeed should—ask
is, “How far have we come over the past two decades?” And we should
also ask about the distance we have to go.
The easy metrics involve individuals and individual work: if you (like me!)
are “middle-aged” and “mid-career” (somewhere between
40 and 65), the digital shadows of IT are everywhere: e-mail, word processing,
PowerPoint presentations, course management systems, and instant messaging,
coupled with the emerging ubiquity of wireless technologies and video, all serve
as constant reminders of how much the work environment in academe (and elsewhere)
has changed over the past two decades.
The Web, in particular, has dramatically enhanced what was, until about 1995,
the largely unconnected desktop computer. The explosive growth of the Web has
provided fingertip (well, keyboard) access to an incredibly rich and constantly
growing array of resources that reside well beyond my office and time zone.
But for those of us in academe, there are also the instructional and operational
aspects of technology. Here the critical issues are far more difficult to measure:
instructional infrastructure and curricular deployment, as well as classroom
and organizational impacts and outcomes.
Up close and personal, I think about the experiences of my son and daughter,
one a college junior, the other heading off to college in fall 2004. They learned
about computers in elementary and middle school, and were sent to the Web for
information and resources for their class projects and term papers by the time
they hit their teens. Their teachers and professors have used PowerPoint presentations
in class and have included URLs in the syllabus. My children have textbooks
that include CDs. A course management system seems to be widely used at my son’s
college.
But are their classrooms—and is the classroom experience—so different
from what I experienced as an undergraduate three decades ago? To be sure, some
of the physical trappings are different: LCD projectors have replaced overhead
projectors, and many students take notes on computers or PDAs. But the “in-class,
on-task” activities seem remarkably similar to my own experiences as a
college student: lectures, group discussion, and student presentations.
So how, then, do we address the continuing questions about technology and instruction?
What are the appropriate metrics for tracking the instructional integration
of information technology? Number of URLs in the syllabus? Use of a course management
system? Online content and assessment? Number of PowerPoint presentations? Hits
on the course Web site and average session time?
Do efforts to quantify aspects of IT in instruction accurately reflect
the all-important qualitative dimensions and impact of IT in the curriculum?
These are really important questions. Alas, we in the campus community don’t
have very good answers. As I stated in last month’s column, too often
the best we can offer is evidence by epiphany.
So here’s my prediction: much as the past two decades have been marked
by academe’s great aspirations for the role of technology in instruction
and operations, this decade may be marked by efforts to make institutions accountable
for the continuing (and rising) investment in IT. Inquiring minds—board
members and public officials, parents, and even some faculty—will focus
on two questions: (1) Why don’t faculty do more with technology? and
(2) Why don’t colleges and universities make better use of information
technology in campus operations and services?
As we enter the third decade of the “computer revolution” in higher
education, these seem like fair, timely, and, yes, admittedly difficult questions
that we in the campus community will have to address.