A Balancing Act? Openness and Security on Campus
MIT’s network manager and security strategist Jeff Schiller comments
about security on today’s campus networks.
Syllabus: How do you balance
the demand for today’s higher levels of security with the traditional
openness of the higher education computing environment?
Jeff Schiller: You’re making
an assumption that openness and security are on opposite ends of the spectrum
and that you have to choose between them. If you look at the security problems
we have today, they’re in fact not due to the openness of the network.
They’re due to the software that people run.
S: Do you mean the end-user software?
JS: Yes. I mean basically every
computer connected to the Internet. Put another way, you should not depend on
the network to provide protection for your computer.
S: What about the firewalls and
network security software?
JS: The firewall was never an
integral part of the Internet architecture. Firewalls developed because end-host
software wasn’t secure. A lot of software, particularly on PCs, was designed
in the days before networks. It was designed to run on personal computer hardware
that was not very sophisticated when the PC first appeared. So, putting protections
into end user software is difficult. Then add to that various marketing pressures:
for years, security was simply not a priority. When you went to vendors, Microsoft
and others, and said, “You should be putting some time and effort into
making sure the software is not buggy as all-get-out,” the answer was,
“Well, we promised we’d ship by Friday, and you know, this is Internet
time. We’ve got to get this stuff out there—nobody cares about security
anyway.”
And so, with that kind of history, firewalls got developed because quite frankly,
the network managers were told to “do something,” and that’s
the kind of thing you can do. But don’t mistake that to mean that the
only way to have a secure network is to have a network that is restricted or
closed.
S: Do you really think it’s
possible to design a personal computer operating system to handle the security
issues that now require firewalls and other complex network security measures?
JS: Yes. You own a Macintosh.
How often does your computer crash?
S: Very rarely.
JS: The reason it doesn’t
crash all that often is that system software developers took some time and
effort to make that the case. If they took the same time and effort to make
it secure, it would be secure.
S: So what is keeping that from
happening?
JS: Some people have said that
the problem is we don’t have normal product liability laws with software.
Now, if you buy an automobile and pieces fall off as you’re driving out
of the parking lot, the onus is on the dealer to make good. If you buy a toaster
and it breaks, it’s the manufacturer’s problem to fix it. But you
buy software, and the license says, “We’re granting you a right
to use this software, but if it doesn’t work, or if it destroys all of
your property or your career, we owe you nothing.”
S: So there are no strong warranties
with software...
JS: Well, they may say, “We
warrant that the CDs the software comes on are made of plastic and they won’t
physically crack in the first 30 days.”
S: Right!
JS: Now, the problem is that if
you decide to put liability upon software authors, you destroy open source—because
those people can’t tolerate any liability. So, if I were king, I would
rule that if you’re selling software then you bear a certain liability;
but if you’re giving it away in open source, then you don’t.
But, I fear that the commercial interests in this game, if they felt that Congress
was backing them into a situation where they would have to accept liability,
my guess is they would strenuously lobby that liability applies to everything,
including open source, in an attempt to kill off open source. So that’s
the conundrum.
S: How would it work then, if
the commercial software authors were liable and the creators of open source
were not?
JS: Open source is a fundamentally
different thing. With open source, if there’s a problem I can fix it as
the consumer. Obviously I have to have the skills to do that, but I do have
the ability—the access—to do it. With closed source I don’t.
I’m literally at the mercy of the vendor to fix it.
[Sidebar: About Jeff Schiller]
Jeffrey Schiller is Network Manager at MIT and has managed the MIT Campus
Computer Network since its inception in 1984. Prior to his work with the
Network Group he maintained MIT’s Multics timesharing system during the
timeframe of the ARPANET TCP/IP conversion. Schiller is an author of MIT’s
Kerberos authentication system. From 1994 through 2003 he was the Internet
Engineering Steering Group’s (IESG) Area Director for Security, responsible
for overseeing security-related Working Groups of the Internet Engineering
Task Force (IETF). He was responsible for releasing a U.S. legal freeware
version of the popular PGP encryption program.
Schiller is also responsible for the development and deployment of an
X.509-based Public Key Infrastructure (PKI) at MIT. He was the technical
lead for the Higher Education Certifying Authority operated by the
Corporation for Research and Educational Networking (CREN) from 1999 until
2003. Schiller is a founding member of the Steering Group of the New England
Academic and Research Network (NEARnet). NEARnet, now part of Level3, is a
major nationwide Internet service provider.
S: But realistically, is that
happening? Do all the people who are running Linux boxes have better security
out of the box, or do they add it in themselves?
JS: I think Linux is much more
secure than a lot of the other stuff that’s out there, because so many
people look at the source code—not everyone looks at it, but enough people
do, so that problems get fixed earlier, rather than later.
S: What are the major security
risks in the higher education environment that network users face from viruses
and worms, and, given all that you’ve just mentioned about end-user systems
being weak, what are the most reasonable protections?
JS: One of the things universities
have is legions of students with poorly managed computers that don’t have
patches installed, and those machines are likely to be compromised by any of
the worms that are running around the network right now. That’s one view
of it.
Another view is the risk to administrative data. Situations vary a little bit
from institution to institution, but one problem is that administrators use
the same type of poorly managed PCs that the students use. If they put sensitive
information on those desktops, it can be compromised. But exactly how at-risk
each institution is depends a lot on how it manages data and what its various
policies are.
I would also go somewhat out on a limb and say that the more sophisticated
research-oriented universities are probably in deeper trouble.
S: Why would that be?
JS: Speaking as a network manager
at an institution with Nobel laureates, it’s harder for me to set policy
and make it stick. The more famous your faculty, the more they’re in charge.
And the more the faculty can do whatever they want, the more chaotic your network’s
going to be.
S: So how do you manage that—do
you have a firewall?
JS: People have often asked me,
“Could you firewall MIT?” And, you know, I don’t want to and
I think it’s the wrong thing. Even if I wanted to, my faculty would not
permit me. Or more to the point, the faculty would say, “Yes, sure,”
but as soon as they couldn’t do something on the network, they’d
say, “Take out the firewall” or “Put in an exception so I
can do what I want to.” Firewalls that are filled with holes because somebody
wants to do something quickly become useless.
S: If not a firewall, then what
is your strategy?
JS: There is one good technique,
and it’s the only one that’s effective. No firewall, no port blocking—none
of that will work. The solution is that you must install patches.
S: Patches for each and every
PC, then...
JS: If you own a PC, you must
install patches. You must pay attention. And if you’re running a
more modern version of Windows, things like automatic update can help. I’m
going to give Microsoft some credit there. They’ve tried to make the installation
of patches as painless as possible. But it’s still something that you
have to sign up for.
I might add, by the way, that firewalls don’t protect you against these worms,
because once a worm gets on the other side of the firewall, the firewall’s
useless. For example, at one point the State Department’s visa processing
system got one of the worms. And you can guess there’s a big firewall
between that and the Internet. In fact, I’d be willing to bet that thing
is not even connected to the Internet. And yet one of the worms got through
to it. Probably by somebody taking a laptop, connecting it to the public Internet,
catching the worm, unplugging the laptop, coming to their office, plugging it
into the secure network and boom, now the secure network has the worm.
That’s why I say firewalls are not useful.
S: What about attacks specifically
directed at the network itself?
JS: Well, the only worms we’ve
seen that have “hurt the network” do so by creating so much traffic
that the network gets overloaded.
There have been occasional distributed denial of service attacks against network
infrastructure. About a year or two ago there was an attack against several
of the root servers. Since then the root DNS servers have been hardened. They’re
actually a lot harder to attack now than they were then. It’s amazing
we have to learn these lessons one at a time. Now, there is a lot of attention
to protecting the infrastructure of the Internet itself against attack. I’m
less concerned about this than I was a year ago.
S: I wanted to ask about the state
of encryption technology—the safety of data going over the network.
JS: Encryption technology itself
is in good shape, even if ciphers come and go—the cipher du jour. People should
be using either what’s called triple DES, the Data Encryption Standard applied
three times over in a subtle way, or AES, the Advanced Encryption Standard that
NIST sponsored, which has been out for a few years. Encryption is not the problem.
Getting it deployed into actual products is where the challenge remains, and
how to do that in a way that mere mortals can handle. There are places where
encryption has been so successful you don’t even know it’s there.
The classic example is HTTPS on the Web.
Go to any major eCommerce site—where you enter your credit card number,
that transaction is encrypted. And you don’t even have to put a lot of
effort into worrying about how that’s happening. You see the little lock
icon, and you’re good. So there’s an example of where encryption
has been very, very successful.
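[Editor’s note: to make the point concrete, here is a minimal sketch of
encrypting data with AES, using the third-party Python “cryptography”
package. The library choice and every name below are illustrative
assumptions on our part, not anything Schiller specifies.]

    # A minimal sketch of AES encryption with the third-party Python
    # "cryptography" package (pip install cryptography). Illustrative only.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)  # a fresh 256-bit AES key
    aesgcm = AESGCM(key)
    nonce = os.urandom(12)  # GCM needs a unique 96-bit nonce per message

    plaintext = b"credit card number: 4111 1111 1111 1111"
    ciphertext = aesgcm.encrypt(nonce, plaintext, None)  # encrypt and authenticate
    assert aesgcm.decrypt(nonce, ciphertext, None) == plaintext  # raises if tampered with

This is the same kind of machinery HTTPS runs for you automatically; as
Schiller says, the cipher is not the hard part, deployment is.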
S: Where could it be used more
effectively?
JS: A place where it’s not
been very successful, and where we could really use it, is e-mail. There have
been several attempts to do encrypted e-mail.
The protocols are designed. There are many systems out there, but they’re
just a little too hard to use for the average person, so they don’t tend
to get used very much. We could really use some innovation to get encrypted
e-mail up and running.
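[Editor’s note: to illustrate that the machinery for encrypted e-mail exists
and the hurdle is key management, here is a rough sketch using the
third-party python-gnupg wrapper. It assumes the gpg binary is installed and
the recipient’s public key has already been imported, which is exactly the
step most users never get past; the address and message are made up.]

    # A rough sketch of OpenPGP-encrypting an e-mail body with the
    # third-party python-gnupg wrapper (pip install python-gnupg).
    # Assumes gpg is installed and the recipient's public key was
    # already imported; the address below is hypothetical.
    import gnupg

    gpg = gnupg.GPG()  # uses the default ~/.gnupg keyring
    result = gpg.encrypt("Spring term grades are attached.",
                         recipients=["advisor@example.edu"])
    if result.ok:
        print(str(result))  # ASCII-armored ciphertext for the mail client
    else:
        print("encryption failed:", result.status)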
S: What would be the most needed
or most useful applications of encryption on campus?
JS: Any time you’re going
to send sensitive information over the network you want to encrypt it. For example,
at MIT our students and their advisors can look at grades online. We made sure
that they use a Web-based application with HTTPS so that the traffic is
covered by encryption.
And in fact, a real benefit in the education space is that the more things
you can offer as Web services, the easier it is to provide security.
S: So it seems like institutions
that have Web-based portals could certainly take advantage of HTTPS.
JS: If the portal software supports
secure connections, yes, they can. Now, there’s a subtlety here. Just
because you encrypt the information going over the network does not mean that
you can run insecure software on the servers.
Because of the widespread use of encryption on the Web, we don’t have
people stealing credit card numbers by intercepting Web traffic and decrypting
it, because that’s too hard. Instead, they break into the actual servers
and steal thousands of credit card numbers—or lots of other data—at
once.
So the important thing to remember about putting sensitive information on the
Web is that you have to make sure the Web server software you’re using
is secure and up to the task. I don’t want to say, “Oh, use the
portal; you’re fine,” because I suspect that there’s probably
a lot of portal software out there that’s not particularly secure.
S: Are there any other weaknesses
to keep in mind, particularly when accessing data on the Web?
JS: This gets into engineering
implementations. The devil is in the details. Let me give you an example. There’s
a Web site out there—I won’t identify them—that offers survey
services. You can set up surveys and revisit them to see the data collected
or to edit them. But if you look closely at the actual URL in the little bar
at the top of your browser, you will see some long number.
A few of us wanted to know, “Well, wonder what happens if we go into
that title bar there where the URL is and just add one to that number?”
And we did so, and all of a sudden we were looking
at somebody else’s survey, and seeing their answers. The devil is in the
details.
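[Editor’s note: the flaw Schiller describes is now commonly called an
insecure direct object reference. Below is a rough sketch of the vulnerable
pattern and one common fix, written with the Python Flask framework; the
routes and data are hypothetical, not the actual survey service he mentions.]

    # A sketch of an "insecure direct object reference" and one common
    # fix, using Flask (pip install flask). Routes and data are made up.
    from flask import Flask, abort, jsonify, session

    app = Flask(__name__)
    app.secret_key = "change-me"  # required for session support

    SURVEYS = {
        1001: {"owner": "alice", "answers": ["yes", "no"]},
        1002: {"owner": "bob", "answers": ["maybe"]},
    }

    # Vulnerable: trusts the number in the URL, so editing
    # /survey/1001 to /survey/1002 reveals someone else's answers.
    @app.route("/survey/<int:survey_id>")
    def survey(survey_id):
        record = SURVEYS.get(survey_id) or abort(404)
        return jsonify(record["answers"])

    # Fixed: also check that the logged-in user owns the survey.
    @app.route("/my/survey/<int:survey_id>")
    def my_survey(survey_id):
        record = SURVEYS.get(survey_id) or abort(404)
        if record["owner"] != session.get("user"):
            abort(403)  # not yours, no matter what number is in the URL
        return jsonify(record["answers"])

Unguessable identifiers help, but the real fix is the server-side
authorization check, which is Schiller’s larger point about making sure the
server software is up to the task.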
S: What’s being done on
campuses to protect student privacy and to comply with FERPA?
JS: Well, FERPA covers what we’re
supposed to do, but it does not specify the level of protection you have to
provide. It basically just says you’re supposed to protect student records
and not willingly give them away to the wrong people. But it doesn’t really
go into what the standard is and how much protection should be offered. I’m
sure the protection is different at different schools.
The biggest threat to student records is how your administration is set up.
Universities are at risk if they let administrators with insecure machines download
sensitive data and leave it on their hard drive. Some worm could come along
and actually mail it out.
I mean, a lot of people, when you talk about security for administrative data,
they’ll immediately say, “Well, you know, our registrar’s
database is protected this way, that way, and the other way.” And you
say, “Well, that’s all well and good, but where does that data go?”
In fact, there’s an interesting conundrum here, though it’s not as
bad as the one I talked about before. This one’s about human nature.
You will have a situation where the administration will protect the data, and
they’ll say, “Well, we’re going to only allow authorized representatives
of the registrar’s office to get to the registrar’s data.”
Kind of makes sense, right? And then a researcher comes along and says, “Well,
in order to do my research, I need this much information about these students.”
And what would happen is, rather than giving the researcher access to the actual
database where the data resides, someone will extract the data for the researcher
and e-mail it to them on a spreadsheet. Chances are, it sits on the hard drive
of the researcher and probably sits on the hard drive of the person who did
the work for the researcher. So, in an attempt to restrict who can get at the
data we wind up with the data getting splattered everywhere.
S: So education is a part of
this?
JS: Education is a part of this,
both for the people who own personal computers and work with the data and for
the people running these systems. If somebody is eventually going to get the
data [legitimately], maybe you would like to grant them access in the place
where the data resides and simply say to them, “Don’t keep copies
of this on your hard drive.”
But if they can’t get to the data themselves they’re much, much
more likely to keep a copy on their hard drives once you e-mail them one,
because they think it will be very hard to get another copy.
S: So who’s going to educate
the users?
JS: Well, it’s a slow process.
I think when a school gets burned, there tends to be a lot of running around
in-house to figure out how it happened, and then everybody gets
security-conscious for a year or so
before they go back to their old
habits. One of the fundamental problems is that security is very hard. And what
makes it hard is that it’s a negative deliverable. You really don’t
know when you have it. You only find out belatedly when you’ve lost it.
S: So getting back to the original
question about openness versus the need for high security...
JS: I don’t think it’s
either-or. There’s a need for security but that doesn’t mean we
sacrifice openness to get it.
S: So would you say that you
need a balance of both?
JS: Yes, you need a balance of
both.
[Editor’s note: Jeff Schiller will give a keynote at
the Syllabus2004 summer conference, to be held
in San Francisco, July 18-22.]