Keynotes >> Tackling the Big Questions
UC's Turbulent Times and the Role of IT
UC’s Kristine Hafner connects the dots between growth and fiscal
challenge—and they lead to IT.
With a perspective gained only by working in the president’s office
at one of the largest educational organizations in the world, UC Associate Vice President
for Information Resources and Communications Kristine Hafner gave Syllabus2004 attendees
the “big picture” about the role of IT in higher education. In the
following excerpt of her July 19 keynote, Hafner examines how the University
of California is wrestling with strategic planning during a time when
enormous growth projections clash with a nearly $2 billion budget shortfall.
Can IT help?
Things are at a turning point for the university. We are at a place where we’ve
really never been before. California’s economy was booming until 2001, when
it crashed. The state as a whole lost $12.5 billion in revenue.
In 2002, the university was still in a very steady state, projecting even growth,
moving ahead, with a budget that would essentially reflect that growth. Now,
though the state is beginning to project slight increases in revenue over time,
we have an enormous structural problem in that the spending will always, at
least for the foreseeable future, outstrip what the state is projecting in revenues.
That backdrop is problematic for the University of California as a whole.
At the university, we now have almost a $1.6 billion shortfall at the end
of fiscal year ’03–’04. The budget that is on the table now
will take another $372 million out of the university’s budget, bringing
us to almost a $2 billion shortfall. We have plans to cut freshman enrollment
for the first time ever, we are cutting faculty spending, we are raising fees
across the board, and we are delaying the opening of the new UC Merced
campus. And yet, the University of California is one of the biggest contributors
to the state of California’s economy.
The people who really suffer as a result of this picture are, of course, our students.
Student fees get ever larger. More and more of the responsibility for the funding
gap, the structural deficit that we have, is placed on the shoulders of the
students.
Back in 1999, the University of California as a whole pulled together a group
of leaders across the system, and produced a report that looked ahead at where
we needed to go with our administrative systems infrastructure. When we published
the new business architecture report in 2000, we were just at the beginning
of an enormous growth curve in enrollment. The report asked: How are we going
to accommodate the amount of new growth, when we understand that the administration
of the university will never be funded in step with enrollment growth?
Even then, what we saw was increasing risk, as the student enrollment grows
and the administrative infrastructure probably stays funded at the same level.
We needed to look at a different, more scalable model to ensure that students
coming to campus get the support that they need. The new model was inspired
by several things: the recognition that you have to look at re-engineering business
processes from a number of different points of view; that technology has to
be a critical enabler for a new way of doing business; and that people need
to have tools accessible to them at the desktop, out in the departments where
80 percent of the work gets done (and 80 percent of the costs occur).
UC’s Turbulent Times At-a-Glance
10 campuses
5 medical centers
3 national labs
200,000 students
Proposed budget of $14.5 billion
Potential shortfall of $2 billion
Out of all of that, in the framework that we created, you’ll see the
portal in the center. Most of our campuses have embraced integrated portals,
to bring applications and tools and technologies together at the desktop. But
we also realized that you have to address the underlying business processes
and the policies associated with them in order to be more effective and efficient.
So, we move business to the Web through integrated portals. We also look at
the training and skills development of people in those areas and foster enabling
technologies. We have provided more integrated financial information. UC has
10 different financial systems with distributed general ledgers, so we have
a huge issue of how we get information from our campuses up to a central reporting
facility. The idea is to do some corporate data warehousing development that
pulls data from the campuses. Finally, we track performance of our academic
and business units over time. These components together created the new business
architecture context.
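To make the warehousing idea concrete, here is a minimal sketch of a consolidation job under invented assumptions; the CSV layout, the account crosswalk, and the consolidate function are all hypothetical, since the keynote describes the goal, not UC’s implementation:

```python
# Hypothetical sketch of "pulling data from the campuses": each campus exports
# its general ledger in its own local format, and a central job maps the rows
# onto one common chart of accounts for systemwide reporting.
import csv
from collections import defaultdict

# Invented crosswalk from each campus's local ledger codes to common accounts,
# so several distributed ledgers can be reported as one.
CROSSWALK = {
    ("campus_a", "4100"): "instruction",
    ("campus_b", "INST"): "instruction",
    ("campus_a", "6200"): "administration",
    ("campus_b", "ADMN"): "administration",
}

def consolidate(extract_paths):
    """Sum ledger amounts from per-campus CSV extracts into common accounts.

    Each extract row is expected to have columns: campus, account, amount.
    """
    totals = defaultdict(float)
    for path in extract_paths:
        with open(path, newline="") as f:
            for row in csv.DictReader(f):
                key = (row["campus"], row["account"])
                common = CROSSWALK.get(key, "unmapped")  # flag unknown codes
                totals[common] += float(row["amount"])
    return dict(totals)
```

The crosswalk is the crux of such a design: campuses keep their local ledgers, while the center can still report against a single set of accounts.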
The framework was not prescriptive. There was a lot of excitement around the
fact that leadership across the institution came to agreement on this set of
principles, creating a framework; each campus could then figure out what it
means for them. Each of the campuses has taken the concepts and molded them
into their own campus-based, new business-architecture project, to fit their
own culture; this is one of the great successes.
We made a report to the Board of Regents in 2004, addressing what has happened
with the new business architecture, and how we are looking forward. The budget
picture I mentioned earlier explains why we see a slightly different focus now.
Leadership is now less concerned with using the technology to drive process
change; instead, it’s: How do we manage our risk? How do we cut money
out of the budget? How do we become more efficient? And how do we do things
one time and not 10 times, for 10 campuses? We are looking for leverage. As
a large institution, how do we leverage our size and purchasing power? Will
this model hold us steady through both huge growth and through retrenchment
(which is really where we are right now)? Is the model scalable enough to be
successful?
And we realized that the regulatory burden enormously affects our ability to
be effective and efficient; we are saddled with so many processes. That’s
another indication that our focus has shifted because of the times.
Many new, system-wide initiatives came out of this new architecture, particularly
with respect to the role of technology. Self-service via the Web is an overriding
theme that all of our campuses have embraced. So, centrally, in the Office of
the President, we find benefit systems and payroll systems—these are
systems that are centrally developed. We are moving those things to the Web
so that employees throughout California can access and update their own information.
And we want to have the same types of Web services available for students. Pathways
is a Web-based tool that is being used by about 90 percent of our admissions
applicants—a single application interface via the Web. UC for Yourself
is a suite of tools for employees—the whole UC community—that we
are continuing to enhance. The issue of electronic forms—automating workflow,
digital signatures, and how to move from paper-based processes to automated
processes—these are the major themes: self-service processes to be delivered
via the Web.
Another area that the university as a whole is pursuing is this notion of common
solutions. It’s very difficult at the University of California to have
prescriptive solutions or to mandate that all the campuses go in one direction.
Really what we are saying here is that we are looking for the collective interest,
and where it is better for us to do things together one time than it is to do
them 10 different times in 10 different ways. For example, as large research
institutions, we have the issue of how we report effort on federal contracts
and grants. This past year, five of our campuses and the Office of the President
jointly funded the development of a new effort-reporting system. Also, we are
doing a lot with not only jointly developing, but also sharing of Web content
and training tools, and all of our campuses are really invested in moving things
to the Web. Another example is our federated identity management project, now
in pilot, in which three campuses and the Office of the President are
working closely together (this is a Shibboleth project) to demonstrate that
we can have a federated identity management model for the University of California
as a whole.
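As a concrete, hypothetical illustration of the federated model: under Apache with Shibboleth’s SP module, asserted attributes typically arrive in the request environment, and the attribute names below follow common eduPerson usage; the function itself is invented for illustration, not UC’s code:

```python
# Hypothetical sketch: what a campus web application might see once a
# Shibboleth service provider (SP) has completed federated sign-on. The SP
# injects attributes asserted by the user's home-campus identity provider.

def get_federated_user(environ):
    """Return identity details asserted by the user's home-campus IdP."""
    # eduPersonPrincipalName: a scoped identifier like "jdoe@campus.edu";
    # the scope says which campus in the federation vouched for the user.
    eppn = environ.get("eppn")
    if not eppn:
        return None  # no SP session yet; a real app would redirect to sign-on
    # eduPersonAffiliation: e.g. "student", "staff", "faculty".
    affiliation = environ.get("affiliation")
    user, _, home_campus = eppn.partition("@")
    return {"user": user, "campus": home_campus, "affiliation": affiliation}
```

The design point is that the application never handles a campus password; it trusts the attributes because the identity provider at the user’s home campus asserted them.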
Finally, how do we leverage the size of our university? One example is that
we have involved all of our campuses, the medical centers, and the laboratories
in a consortial software licensing effort. We have seen enormous financial benefits—
$100 million in savings already—from our new software licensing consortium.
We are also doing strategic sourcing on other levels—desktop technologies,
for example—to try to aggregate the spending. Some of the campuses are
looking at consolidating and recentralizing things that were decentralized years
ago. This is a challenge, and in some cases, controversial. But people are realizing
now that we have created such a complex environment to manage, particularly
given the security concerns that we have, that it can actually be considered
an unmanageable environment. There is a lot of thinking now about the idea of
changing the model; not recentralizing, but centralizing what makes sense, and
leaving decentralized what makes sense. The California Digital Library (CDL)
is a fantastic example: It does not replace the campus libraries, but augments
and partners with the campus resources to create a common set of digital resources
that are there for everyone—including the public of the state of California.
Yet another great example of taking our resources and pooling them in support
of something larger is CENIC, the organization that provides network services
to the university. UC was one of the founders, along with the Cal State system,
the community colleges, Stanford, Caltech, and USC. There is a whole range of
network services that we’ve
invested in and continue to enhance. The capability that is here for our research
and teaching enterprise is absolutely phenomenal. California is also part of
National LambdaRail, the national fiber optic infrastructure supporting the
research enterprise.
So, for the new business architecture, we took a very broad look at what
we needed to be doing going forward; and yet, we realize that it’s still
a challenge within the university and among the leadership to determine just
what the role of information technology is. Let’s recognize that
we are truly an information-based organization, and that we need to represent
that in how we speak not only to our core competencies, but to where we are
heading in the future. If we don’t start focusing on having the right
information technology infrastructure in place, it’s going to be harder
for us to talk about how to map that to the institutional mission. It is not
always clear to the leadership of our institutions in general that IT needs
to be at the table and needs to be an integral part of the planning in our institutions
as we move forward, particularly now in the year 2004.
Can the Internet Survive?
MIT’s Jeff Schiller reflects on the governance of the Internet, and
the endurance of the global network as we know it.
As MIT’s network manager since 1984 and a longtime participant in
national networking initiatives and the international networking community,
Jeff Schiller has seen it all—from
the early research of the ARPANET to the everyday communications of today’s
global Internet. In this July 21, 2004 keynote, Schiller looks at the governance
of the Internet and its future survival, and asks: Who’s in charge here?
We have a real issue coming up with the Internet. You hear about the latest
computer worms and spam, and you think, “Can the Internet survive?”
But there is a deeper threat. The Internet has many stakeholders. I’m
from the early generation; the guys who helped build the Internet. We had a
certain culture about us; a certain set of ethics, which, unfortunately, are
not shared by the majority of today’s Internet providers—lots of
users share these values, yes, but not the providers.
The Internet was born on January 1, 1983. That was the day when the ARPANET
muckity-mucks said, “Thou shalt speak TCP/IP, or thou shalt not speak
at all.” That forced us to change from the old technology, Network Control
Protocol (NCP), which allowed computers to talk to one another on the ARPANET.
But in those days, a port on the ARPANET cost $80K—and those were bigger
dollars. And you could connect exactly one computer to that port. But
with TCP/IP, that one port could connect an entire campus.
So, once the Internet was started, we had exponential growth. By 1988, when
the Internet worm first showed up, 60,000 computers were connected to the Internet,
of which roughly 6,000 were infected. It was the first time the Internet made
it into the mainstream press. And it was the first time I could explain to my
parents what I did for a living.
Then, around 1991, we had about a million hosts on the Internet. After that,
it became too hard to count the growing numbers. Today, when we talk about how
many people use the Internet, we refer to percentages of populations of countries.
Like China. It’s mind-boggling to a geek like me that the Internet has
become the communications infrastructure for the planet.
The original culture of the Internet was largely one of Internet researchers.
We didn’t have security problems—except for the occasional researcher’s
child having fun with mommy or daddy’s modem. Interestingly enough, the
purpose of the original Internet was, at first, to allow researchers in one
location to talk to a computer someplace else; then, it evolved into the ability
to move data from one computer to another. But using it as a way to communicate
between people was, in fact, always forbidden: E-mail in those days was the
forbidden application. Today it is the killer application. So, if today
you say that peer-to-peer (P2P) is the forbidden application, then
you can complete the sentence…
But what I really want to focus on today is governance: how do we govern the
Internet? We don’t know the answer; this is a work in progress. It is
the kind of problem that wars are fought over; it’s no measly little thing.
Originally, governance was provided by ARPA. They chartered the Internet Activities
Board (IAB), a panel of principal investigators who took ARPA funds to do networking
research. Later, ARPA tried to phase themselves out when the Internet stopped
being cutting-edge research. NSF stepped in: The Internet’s first transition
was from a group of network researchers trying to figure out how to build networks,
to a bunch of researchers trying to use the network to further other disciplines.
At that point, the IAB became independent, with an invitation-only membership;
for the most part it was run as a self-serving boys’ club. Yet they did
a wonderful job, spending hours cogitating over the end-to-end model of networking,
and how to do routing so that the network could get bigger. The challenge of
the Internet was always: Can it scale? And the protocols have, in fact, evolved
to do just that.
I like to describe the Internet as the cliff we are always running to. But
we are putting up scaffolding and building the cliff faster than the guy running
at it can go. Occasionally we got pretty close, and we kind of held on with
our fingers to the edge, but we always made it.
The IAB remained self-selecting until an open, participatory body—the
Internet Engineering Task Force (IETF)—grew out of the IAB. All you have
to do to participate in the IETF—and it’s true today—is to
join the mailing list of its working groups and go to its meetings if you want
to.
So who owns the Internet? The answer is, everyone and no one. Every organization
controls its little piece. We all control our network; ISPs control theirs.
So we have ISPs that have a fixed rate per month and an acceptable use policy
(AUP). AUPs can be reasonable, or they can be horrible. Think of Rollerball—an
awful movie that depicted a future where corporations ruled the
world. What is the primary value of a corporation? It is expedience, and the
primary reason for a corporation is to make money for its shareholders. So when
you see a cute company slogan, such as “We make living better…”—no,
that’s not their primary job; the primary consideration of
the company is to make money, not how they make the money. Bottom line:
It becomes governance by contract.
Take a look at your typical cable modem AUP. It may say something like, “You
may not run any server we don’t like,”—but they don’t
tell you what those are—and, “We can terminate your service for
no reason, whenever we like.” These are the kinds of clauses you’ll
find. Internet providers are not common carriers; they don’t have to provide
service, and they can deny service for whatever reason they want.
“It’s mind-boggling: The Internet has become
the communications infrastructure for the planet.”
One ISP put words in its contract that said, “We can read your e-mail.”
And it’s actually broader than that: It said that, “We can monitor
all of your traffic, including e-mail.” And then it went so far as to
say, “But if in the process of reading your e-mail we learn of some intellectual
property you have, we automatically get rights.” That was in the contract—and
it wasn’t a small organization, it was IBM.net. Now, I can speculate that
they were probably worried about somebody bringing a suit against them, saying,
“Hey, IBM, that technology you just came out with, we really own that
and you must have learned about it by reading our e-mail.” So they were
probably trying to neutralize that kind of defense. Still, there it is.
Another ISP actually had in its acceptable use policy that you may not disparage
them. So, if I put in a posting somewhere that “I have service from this
ISP and it stinks,” I have violated the AUP—at their sole discretion,
mind you. In fact, everything in that AUP was “at their sole discretion.”
Let’s look at Internet constituencies. You have Internet service providers,
Internet users that I call residential users, enterprise Internet users,
universities, governments, media companies, technology vendors… Who is
represented? Governments are represented, because they have the guns. Enterprise
users have their forums. Internet service providers have NANOG in this country
(though it’s really global). And you have the IETF, which brings together Internet service
providers, technology providers (like Cisco), enterprise users, media companies…
But guess who’s missing? You have nobody representing Internet users.
Coming back to the point of my talk—Can the Internet Survive?—media
companies think the P2P nature of the Internet is evil. They want to be content
providers, and they want users to sit on their couches. In fact, a cable vendor
in its AUP says that I can’t run a Web server on my machine at home. Forty
bucks a month?
You don’t get to have a Web server. Now, their interest,
of course, is just trying to share the bandwidth equitably, and I don’t
really object to that. But media companies have a different spin: They don’t
want me to be able to host a Web site unless I pay a lot of money. So I get
scared when I think about media companies buying cable modem vendors. Maybe
the motivation of the ISP will shift from making sure that the bandwidth is
shared equitably, to “making sure we protect our media business.”
And, of course, the thing they really want to go after is P2P file-sharing,
which they consider a threat.
This changes the Internet; it is no longer the Internet that we knew it to
be. This path leads to the end of the Internet and the ability to come up with
new, innovative applications, whether it’s audio, video, interesting instructional
technology… Imagine, if you would, that you want to do some kind of innovative
distance learning, and reach out to consumers, through some interesting application
that isn’t a Web browser, but you can’t, because only Web browsers
are allowed. That’s the end of the Internet. What can we do? We have to
keep the Internet P2P, even when it hurts, even though on your campuses P2P file-sharing
is not pleasant; it uses up bandwidth, and it costs you money. But if you try
blocking it, you’re taking a step toward that tombstone. We must maintain
the end-to-end connectivity principle.
Push back on ISPs that overreach. Negotiate. Be thankful that we still have competition.
There may be a shakeout in ISPs in our future, and we may not have the competition
we have today. Let’s use that power of competition while we have it. You
don’t have to buy from a lousy ISP. Put it in your contract that they
will not monitor your traffic. Do not bow to pressure from media companies.
Digital Learning Cultures in the Information Landscape
Clifford Lynch considers the shifting relationship between scholarship
and CMS.
As executive director of the Coalition for Networked Information, Clifford
Lynch has an exceptional view of the changing landscape of information in the
digital realm. In these excerpts from his July 19 keynote at Syllabus2004, Lynch
takes a look at the changing relationship between scholarly communications and
the course management system.
Let me sketch the familiar landscape in higher education, circa 1990. We had
academic computing, which went through a strange sort of hollowing out and distribution
process during the ’80s and early ’90s. As we moved away from centralized
academic computing, many institutions became involved in supporting distributed
workstations and infrastructure components like the network. We had administrative
computing, which had a big infusion of money in the late ’90s, partially
because of the Y2K boogie man. And then we had library computing, which I would
say by the early-to-mid ’90s was legitimately recognized as another leg of
the computing stool. That included online catalogs, circulation systems, interlibrary
loans—a set of rather stable functionality that had been in place. We
had, starting again in the early ’90s, a certain amount of construction
of what were called digital libraries—mostly digitized special collections.
You had eReserves coming online. This was basically a pretty static world; lots
of incremental evolution, but not too many new species being introduced.
Sometime in the late ’90s things really got strange. All of a sudden,
fundamentally new things got loose in this environment. You can see them reflected
in the organizational chaos and variability that characterizes our organizations’
responses to them. The learning management systems were very prominent. As so
often happens, these things came in a bit stealthily. But, very rapidly, they
became institutionalized. We started seeing administrative pronouncements that
pushed these more and more prominently into infrastructure. They rolled out
very quickly—faster than we had designed policies to manage them.
At about the same time, we moved on from our first modest efforts in licensing
digital content from publishers, and digitizing local content, to the point
where by the late ’90s institutions started to build up really massive
collections of journals in digital form, and digitized monographs and special
collections. In the last few years we’ve seen the establishment of a set
of programs called institutional repositories. These are basically distribution
and stewardship services that universities have established to: (1) collect
faculty work for dissemination and management; and (2) record the cultural life
of the campus.
The faculty work reflected a set of trends that we’ve seen captured in
discussions of cyber infrastructure in support of science and engineering and
of parallel efforts in support of humanities and social sciences. In the UK
and most of Europe they talk about eScience and eScholarship, which I think
somewhat better captures the issues here. The fundamental nature of scholarly
practice and scholarly communication is changing. It has already undergone a
phase change in most of the sciences and engineering. The change is more fragmentary
in the humanities and social sciences, but the nature of this change is that
it exploits computation and collaboration enabled by technology. It exploits
digital content and very sophisticated large-scale sensor systems and builds
heavily on datasets, computation, and simulation as an integral part of the
scholarly process. The broad palette of scholarly communications is now much
more than our traditional system of scholarly publication can handle.
Institutional repositories represent a place to store and manage this material;
to make it “not at risk,” in the sense of keeping it alive and
accessible. These kinds of systems have also shown up and transfigured the information
landscape.
Are we being too narrow in our definition of learning management systems? Today,
we think of learning management systems as support tools for teaching and learning
processes highly connected to the formal instructional activities of our institutions.
But the clear distinction, the wall separating teaching and learning from scholarship,
is extremely artificial. In fact, if you look at our graduate programs, that
wall mostly doesn’t exist. You see initiatives at some of our universities
pushing against that artificial wall to more directly engage undergraduates
with faculty research, to construct learning experiences and environments that
are much more discovery-based.
That leads me to wonder if learning management systems shouldn’t be thought
of as just the beginning of something that generalizes into collaboration environments
in which both learning and research can take place. Perhaps terms like “environment”—which
invite us to think about environments populated by a range of tools from a range
of sources—may be a better future picture of where these learning management
systems evolve, rather than having scholarly collaboration systems become yet another
silo, distinct from learning management systems.
“The wall separating teaching and learning from
scholarship is extremely artificial.”
Here are a couple of questions we need to ask about learning management systems:
Where do they fit, and what are the interfaces? I can tell you, for example,
that libraries didn’t pay much attention to the development of learning
management systems for quite some time. Then, about two years ago, a series
of issues showed up that got the libraries’ undivided attention.
All of a sudden, we started to hear complaints from faculty who wanted to integrate
material—either material that they captured, or often times, material
that the institution had licensed, that was part of the digital collections
that were hosted by the library. They wanted to bring that into the course environment,
and that process turned out to be rather awkward. It also exposed the lack
of systematic authentication/access management systems at some of our institutions.
Also, if some of this material was licensed and you wanted to use it within
the context of the course management system, you got into the issue of having
to fire off multiple authentication and authorization mechanisms.
Those were some of the problems brought out, but part of the solution got the
libraries really nervous. At least on the commercial side, you started to see
positions getting established at some of the learning management system vendors
that dealt with the licensing of content. All of a sudden, deals were being
cut to license content that lived in the silo of the learning management system—things
that historically might have comprised course readers or databases out of which
course readers could have been constructed. The libraries realized that they
had spent 10 or more painful years developing expertise in writing and negotiating
licenses for digital content on behalf of the institutional community.
Now, here was another potential silo, where a different group, not synchronized
at all with the work the library was doing, might be re-licensing the same material,
and might be doing it under a set of terms that were inconsistent with the policy
positions that the library had painstakingly hammered out on areas ranging from
preservation to privacy. Resources were too scarce, and the policy problems around
digital content too complex, to make this balkanization of the acquisition
of digital content acceptable. This suddenly got libraries deeply engaged in
the question of learning management systems.
Now, I will say that we’re still at the beginning of a set of policy
positions that I suspect are going to get additional reconsideration. Let me
just point you at one that I think is rather interesting. Textbooks had been
something that students purchased, handled by the university bookstore or other
bookstores in the vicinity of the university. And there was this whole nastiness
about the used textbook market, which the publishers hate. Textbook prices,
by the way, have been running up quite a bit in recent years. I would direct
your attention, in case you missed it, to a report that a number of student
groups put out earlier this year, called “Rip-Off 101,” which was
a study of some of the publisher practices of putting out what were fundamentally
spurious new editions of textbooks, in which they did exciting things like renumbering
exercises—destroying the resale market for used copies or earlier editions.
There’s a lot of irritation about the price of textbooks.
There are also issues about the cloud of digital materials: study guides, stuff
that goes in learning management systems, supplementary exercise materials,
and faculty support material that surrounds them. As we start looking at more
and more reliance on these learning management systems, I think we are going
to have to reopen the question of the institution’s involvement in the
licensing and management of contracts surrounding textbooks (in some cases).
This is an area that libraries have really stayed out of. Yes, they may buy
a single copy of a textbook as part of their reference collection, but typically
they make no attempt to get involved in the licensing of textbooks as opposed
to reserve material to support classes. This is just one example of how I expect
things to get re-negotiated as we go forward.
It’s clear we need a set of linkages and interfaces that facilitate the
incorporation of digital content from a vast multiplicity of sources, both by
reference and by copy, into our learning management systems. There is also a
lot of discussion about what other interfaces we need to make to library services,
and I would say that the jury is still out as to what other library services
we need to extend into the world of LMS. For example, do we need a button in
the course management system course site, to directly issue searches into library
catalog systems—rather than opening another window? Do we need direct
linkages for virtual reference? There are experiments underway in these sorts
of things. Another set of issues is whether we need a librarian presence, essentially
as a member of courses that are hosted on the learning management system, in
the same way that teaching assistants and faculty have a presence. There has
been some work on that area in some institutions.
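To ground the search-button question Lynch raises above, here is a minimal sketch of how a catalog search issued from inside a course site might work, using SRU (Search/Retrieve via URL), a standard Web query interface many library catalogs expose; the endpoint URL and the search_catalog function are hypothetical:

```python
# Minimal sketch: an LMS-side handler that forwards a search to a library
# catalog over SRU (Search/Retrieve via URL). The endpoint is hypothetical;
# real catalogs publish their own SRU base URLs and supported indexes.
import urllib.parse
import urllib.request
import xml.etree.ElementTree as ET

SRU_BASE = "https://catalog.example.edu/sru"  # hypothetical endpoint

def search_catalog(query, max_records=5):
    """Issue an SRU 1.1 searchRetrieve request and return record titles."""
    params = urllib.parse.urlencode({
        "version": "1.1",
        "operation": "searchRetrieve",
        "query": query,                 # a CQL query, e.g. dc.title=networks
        "maximumRecords": max_records,
        "recordSchema": "dc",           # ask for simple Dublin Core records
    })
    with urllib.request.urlopen(f"{SRU_BASE}?{params}") as resp:
        tree = ET.parse(resp)
    ns = {"dc": "http://purl.org/dc/elements/1.1/"}
    # Pull the Dublin Core title out of each returned record.
    return [t.text for t in tree.findall(".//dc:title", ns)]
```

A linkage like this keeps the search inside the course context rather than bouncing the student to a separate catalog window, exactly the kind of extension on which, as Lynch says, the jury is still out.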
But at some level, those seem to be the easy questions. The really ugly policy
questions go the other way: the export of material from learning management
systems. What is exportable? Where is it exported? For what purpose and how
long? Who gets to see it? Who is responsible for it? I would suggest that this
complex set of issues is much more difficult at a policy level than simply extending
library services and content into our learning management systems or our scholarly
collaboration environments, going forward.
“The really ugly policy questions [involve] the export
of material from LMS.”
So… these are interfaces not just to make content importable; equally important,
they are to make content exportable and manageable. We need to recognize that
one of the key results of the deployment of learning management systems, whether
they are being used to support fully net-based teaching and learning, or whether
they are intended to complement or supplement face-to-face teaching and learning
activities, is that they are forcing us to engage with a new, very complex type
of digital information object that has legitimacy and importance as a record,
as a teaching and learning tool, as a new mode of scholarly communication.
Circle back to my comments about the notion of a learning management system
as opposed to a broader scholarly collaboration environment. As we look at these
broader scholarly collaboration environments, we recognize that the act of collaboration,
of working together and analyzing data and authoring together, simultaneously
produces a record that we can annotate, share, archive, distribute, and recall,
and that record is indeed a new genre of scholarly communication. There is a
significant lesson there for how we think of LMS.
We are in a very different information landscape today. This is a landscape
where we are challenged to break down the walls of siloed information services
and siloed information and data resources. There is no system, no potential
silo, more important to focus on than learning management systems—and
the relationship to scholarly communication environments.