To Epiphany—and Beyond!
Productivity has been in the news a lot over
the past few months.
Economists, politicians, and pundits all cite productivity as the leading factor contributing to the jobless economic recovery.
Read beyond the headlines and technology emerges as the official explanation
for the recent productivity gains in the American economy. Admittedly, many
of us are working longer and assuming additional responsibilities for no increment
in pay. But our extra efforts notwithstanding, Federal Reserve Chairman Alan
Greenspan and other sage observers tell us that the current productivity gains
reflect, in part, returns for the corporate investment in information technology
over the past two decades.
In this political season it is probably best to let politicians argue over
the numbers and nuances of recession and recovery. Then again, I always thought
that Ronald Reagan, an economics major at Eureka College (Class of ’32),
had it right: a recession is when your neighbor loses his or her job, while
a depression is when you lose yours.
However you feel about the current economy, economists have a fixed and firm
definition for productivity. My daughter’s econ text offers this definition:
“the amount of product produced by each unit of capital or labor.”
Technology, we know, contributes to productivity because it enables us to produce
more or better products or services at a constant or reduced cost, i.e., with
fewer “units of capital or labor.”
Interestingly, it took a long time for Greenspan and company to find evidence
of the technology bang for all the corporate bucks spent on IT over the past
two decades. For example, consider the billions that corporations spent on desktop
computers, software, and training during the first 10-12 years of the current
IT revolution, from the IBM PC to the beginnings of the Web (1982-1994). The
year 1994 is important because that’s when corporate spending on information
technology finally surpassed corporate expenditures on manufacturing technology.
Yet it took more than a decade, well into the late 1990s, before Greenspan and
other economists could proclaim a real return on investment (ROI) for the corporate
spending on IT.
Which leads us to the meaning of productivity in education. I suspect that
most of us in academe struggle to define productivity for our sector of the
economy. We generally prefer not to view our work in terms of inputs, outputs,
or products. Ours is a true profession, a calling if you will. We may “produce”
knowledge, scholarship, and instruction, but we don’t manufacture these
“products” and typically resent any effort to categorize our work
in this manner.
In economic terms we know how to improve the instructional dimensions of “academic
productivity.” For example, we could contain salaries, transfer more of
the teaching load to assistant professors or part-time faculty, or increase
class size. By definition, any of these strategies would make academe “more
productive” because we are changing (reducing) the ratio of inputs (capital
and labor) to outputs (number of students taught).
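By way of a purely hypothetical illustration (these figures are invented for the arithmetic, not drawn from any survey): suppose a department’s $1 million instructional payroll teaches 1,000 students a year, and larger classes allow the same payroll to teach 1,200.

\[
\text{productivity} = \frac{\text{outputs}}{\text{inputs}}: \qquad
\frac{1{,}000\ \text{students}}{\$1\ \text{million}} = 1.0\ \text{per}\ \$1{,}000
\;\longrightarrow\;
\frac{1{,}200\ \text{students}}{\$1\ \text{million}} = 1.2\ \text{per}\ \$1{,}000
\]

By the textbook definition, that is a 20 percent gain in productivity.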
Of course, these simple strategies fail to address complex quality issues or
key outcome measures, such as student engagement and learning.
Enter technology. Like others, I know, really know—in my head
and heart—that technology makes me more productive. When I gave up my
typewriter for a personal computer more than two decades ago, the technology
provided a competitive advantage: I could write (and rewrite) papers and proposals,
create graphics, develop and update project budgets, and prepare conference
materials better and faster than without the computer, and better and faster
than my peers who did not use a computer.
And technology as an instructional resource? Here’s where things get
messy. Yes, technology—from film and television to online content and
interactive simulations—can aid and enhance instruction and learning.
But we do not have a clear definition for instructional productivity or precise
methods to measure student learning and outcomes. At the classroom, program,
and institutional levels, we do not have firm definitions and consistent measures
to assess what we do with IT resources or the impact of institutional IT investments
and deployment efforts.
The absence of consistent metrics and definitive research—comparable
to the data used by economists to measure productivity or by pharmaceutical companies
to document the benefit of new medicines—means that we occupy an ambiguous
gray zone. We are left, knowingly or not, citing former Supreme Court Justice
Potter Stewart’s 1964 opinion on pornography; he couldn’t define
it, but he knew it when he saw it.
So while we may not be able to define academic productivity, we know it when
we see it, or more precisely, when we experience it. In other words,
we have evidence by epiphany.
Unfortunately, evidence by (individual or institutional) epiphany fails to
provide the much-needed data and documentation required to respond to questions
about the impact and benefits of technology in instruction and institutional
operations. We need more than a voice vote of the faculty senate to confirm
that IT makes a difference.
For me, the conceptual map charting the impact of IT on instruction and curriculum
was published more than a decade ago. Writing in Change magazine (Jan./Feb.
1991), and summarizing five years of research on IT and the curriculum for the
National Center to Improve Postsecondary Teaching and Learning at the University
of Michigan, Robert Kozma and Jerome Johnston were ahead of the curve (and the
Web!) in their discussion of key IT issues affecting the curriculum and the
continuing IT challenges affecting faculty, academic programs, and institutions.
Their 1991 article, “The Technological Revolution Comes to the Classroom,”
also calls for “systematic assessment” focused on what, in 2004,
continues to be the need for evidence about “which innovations make a
difference in teaching and learning” and the “need to understand
the connection between educational computing, learning, and teaching.”
Data from the 2003 Campus Computing Survey reveal that only a third (33 percent)
of U.S. colleges and universities have campus initiatives to “assess the
impact of IT on instructional services and academic programs.” Consequently,
it is not surprising that we continue to rely on individual or institutional
epiphany for evidence about the impact and benefits of information technology
on teaching, instruction, student learning, and outcomes.
Chairman Greenspan’s statements linking corporate investment in IT with
productivity are important for us in education. His statements confirm that
infrastructure fosters innovation and that infrastructure enhancement requires
sustained investment. They also mean that we need consistent data and continuing
assessment efforts. Finally, given the short half-life of IT hardware and software,
Greenspan’s pronouncements about technology and productivity suggest that
IT is really an operating cost, rather than a capital cost.
The current financial challenges affecting American higher education have led
some college presidents to talk about “doing more with less, and doing
it better.” In contrast, the quest for evidence about IT beyond epiphany
means that we must, simply, do more assessment, do it better, and begin doing
it now.