The Network: Enterprise Technology Underpinning
- By Annie Stunden
- 05/01/04
Sitting at the core of enterprise campus infrastructure, the network supports
myriad information resources and services.
The network is very much on my mind these days. The network is the real underpinning
of the campus enterprise technology environment. Almost everything else we do
with technology on our campuses is dependent on a reliable high-speed network.
And it really isn’t “a” network or “the” network.
On most campuses we have a network of networks.
Depending on the campus size, anywhere from several to hundreds of local area networks
may be connected to the campus backbone network. A substantial modem pool may also
be part of the campus network environment, and the network may support DSL and
cable modem service as well.
In addition to this wired campus network environment, with its huge web of fiber
and cable, wireless network capabilities are also now part of our portfolio.
And for many of us, the wireless capabilities on our campuses happened when
we weren’t looking. Someone in a campus department bought a little wireless
modem and plugged it in. Folks found it and began to use it. Other folks rolled
their own. And so, the wireless environment may have grown like a weed.
Next thing you know, you are identified on one of those Web sites that list
free wireless access points in your community, because you have been found in
a drive-by. And even as you try to build out a wireless environment that requires
authentication and authorization, those rogue wireless hot spots keep popping
up. This is an area where campus policy may be helpful. We probably need to
batten down the hatches on our wireless networks.
Quality of Service (QoS)
refers to the capability
of a network to provide better service to selected network traffic. The
primary goal of QoS is to provide priority to certain network traffic by
dedicating bandwidth and controlling jitter, latency, and loss. QoS also
assures that priority for selected traffic flows does not make other traffic
flows fail.
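As a concrete illustration (mine, not from this column), here is a minimal Python sketch, assuming a Linux host, of how an application can request QoS treatment by marking its packets with a DSCP value; whether that traffic actually receives priority depends on whether the campus routers are configured to honor the mark.

```python
import socket

# Minimal sketch: mark a UDP socket's packets with DSCP 46 ("Expedited
# Forwarding"), the class commonly used for latency-sensitive traffic such
# as voice or video. Routers that trust the mark can queue this traffic
# ahead of best-effort flows; routers that do not will simply ignore it.

EF_DSCP = 46                  # Expedited Forwarding class
TOS_VALUE = EF_DSCP << 2      # DSCP sits in the upper six bits of the TOS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# The address and port below are placeholders for a hypothetical video receiver.
sock.sendto(b"video frame payload", ("192.0.2.10", 5004))
```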
And what are we using this campus network of networks for? In last month’s
article, I painted a picture of the way we look at our technology environment
at UW-Madison. Just about everything we do is dependent on a well-functioning
campus network. All of the computers use the campus network for communication.
No more SNA network specifically for the mainframe and its terminals—everything
is in the Ethernet world.
All of the enterprise applications we support are dependent on the network for
getting the information they need and returning the information the user needs.
Our back-up and recovery processes are dependent on the network. Our print processes
are dependent on the network. Our students and faculty and staff now require
the network to go about their work of learning and teaching and research and
administration.
On my campus, we are building out a new high-speed network. Our network had
not been a campus-coordinated effort, but rather a patchwork of local area networks
held together by a backbone that had no idea what it was supporting. Our fiber
and cable plant had been installed early on and was no longer adequate to support
a more contemporary, integrated, high-speed network. However, things mostly
worked until we attempted to move high-speed video from one local area network
across the backbone to another local area network (from one campus building
to another).
It became clear that this was not the no-brainer that everyone thought it should
be—that our network of networks didn’t support interoperability
at a high level, and that QoS (Quality of Service) was simply not possible in
this patchwork world. We also did not have the network capability to support
the kind of research that can currently be done effectively in a grid computing
environment—an environment important to our campus research community.
We had a network for 1990, not for 2000.
Reliability of the network is also critical to the enterprise. In a meeting
with our university deans, we discussed whether it was more disruptive for the
telephone system or the network to go down. The deans indicated that they were
much more dependent on the network, e-mail, and the Web than they were on the
telephone. Unfortunately, our phone systems are still more reliable than our
networks. We need to work toward that 99.999 percent uptime for our networks,
with downtime carefully scheduled. The Ivory Soap model (99.44 percent pure)
is not pure enough.
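To put five nines in perspective, a quick back-of-the-envelope calculation (mine, not from the column) shows how little downtime each availability level allows per year:

```python
# Downtime allowed per year at a few availability levels; simple arithmetic.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

for availability in (0.999, 0.9999, 0.99999):
    downtime = MINUTES_PER_YEAR * (1 - availability)
    print(f"{availability:.3%} uptime allows about {downtime:.1f} minutes of downtime per year")

# 99.900% uptime allows about 525.6 minutes of downtime per year
# 99.990% uptime allows about 52.6 minutes of downtime per year
# 99.999% uptime allows about 5.3 minutes of downtime per year
```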
We cannot hesitate to devote the necessary resources to planning, building, and
managing our campus networks. The rest of our enterprise technology portfolio
depends on it.
Next month, I’ll write about how our campus network connects to the wide
world—our state and regional networks, the Internet, Internet2, and the
newest component of the advanced research network infrastructure, National Lambda
Rail.