Security Technology >> Hack Job
The best way to avoid security breaches might be to pay for them.
When a hacker broke into the network at George Mason University
(VA) earlier this year, IT officials were absolutely powerless to stop him.
Within minutes, the hacker compromised the school’s main Windows 2000
server and gained access to information that included names, Social Security
numbers, university identification numbers, and even photographs of almost everyone
on campus. Next, he poked around for a back door into other GMU servers that
store information such as student grades, financial aid, and payroll. Finally,
the hacker tried to crack passwords for other machines—machines in just
about every department on campus. Curtis McNay, a system administrator who oversees some of the university’s
computing systems, saw the whole thing happen. After the break-in, McNay told
the Washington Post that he knew from data streaming across his monitor that
a break-in was going down. By the time the hack was halted, however, it was
too late. Information surely had been copied; privacy most certainly had been
breached. And after a week of investigating the scope and nature of the electronic
break-in, university officials reluctantly sent an e-mail warning 32,000 students,
faculty, and staff members that they were all vulnerable to identity theft or
credit card fraud.
“It appears that the hackers were looking for access to other campus
systems rather than specific data,” Joy Hughes, the school’s vice
president for information technology, wrote in the e-mail blast. “However,
it is possible that the data on the server could be used for identity theft.”
Talk about nightmares. For an institution designated as a Center of Academic
Excellence in Information Assurance Education by the National Security Agency,
the hack attack was disastrous. But the debacle was only the latest in a string
of hack attacks against higher education institutions. In the last two years,
similar attacks have occurred at the University of Georgia,
the University of Texas at Austin, the University of
Missouri at Kansas City, the University of California-San Diego,
and the University of California-Berkeley, to name a few. In
all of these cases, the hackers exploited vulnerabilities in technology set
up to foster collaboration and the free exchange of information. Across the
board, the hackers scored sensitive information, putting users at risk.
These cases may not represent the norm across North America, but increasingly,
US schools are feeling the need to step up security measures to protect their
users from invasions of this kind. Most schools take a traditional approach,
purchasing the latest and greatest Intrusion Prevention System (IPS) technology
from vendors that serve the corporate world (see box below).
Playing it Safe
Of course, the safest way to secure a network is to do it the old-fashioned
way, with a smorgasbord of security products from a variety of leading vendors.
Some of the hottest technologies on the market today are Intrusion Detection
and Prevention (IDP) solutions that monitor “normal” network behavior
at the network gateway and stop anything that they deem out of the ordinary.
Also popular are Secure Sockets Layer (SSL) virtual private networks (VPNs),
which provide secure tunnels for remote connectivity, and deep-packet inspection
firewalls, which scan individual packets of information for viruses, worms,
and other potential threats at the network edge.
Undoubtedly, the best devices are those that combine all three of these technologies
into one. This new category, dubbed Unified Threat Management (UTM) by Gartner
analysts (www.gartner.com)
in a December 2004 report, comprises jack-of-all-trades firewalls that
incorporate SSL VPN and IDP technology into the appliance: products such as
the FortiGate line from Fortinet (www.fortinet.com),
and the REM Security management console from eEye Digital Security (www.eeye.com).
Even vendors such as Symantec (www.symantec.com),
Check Point Software Technologies (www.checkpoint.com),
and Cisco Systems (www.cisco.com)
have unveiled products along these lines. Guess it's never too late
to help customers harden their networks.
Others prefer to handle security on their own, combining off-the-shelf tools
with proprietary measures, to keep things safe. And some of these trailblazing
schools champion a strategy that employs the services of “ethical hackers”
to poke around a network to find vulnerabilities for system administrators to
fix. While few schools actually admit to employing these good-willed hackers,
many experts say the method is a reliable way to stay one step ahead of the
bad guys, and two steps ahead in the security game.
“Attacks can be prevented if administrators have more knowledge of where
their networks are vulnerable and what they can do to defend them,” says
Benjamin Sookying, former director of Network Security Services at California
State University-Long Beach, and now manager of Network And Security Operations
for IT solutions provider Pacific Blue Micro (www.pacblue.com).
“At the end of the day, security really is just a question of managing your
knowledge of the network.”
Confessions of an Ethical Hacker
If anybody understands Sookying’s comments, it’s Paul Tatarsky.
Tatarsky has been an ethical hacker for the better part of 12 years, mostly
breaking into the School of Engineering network at the University of
California-Santa Cruz. Tatarsky started his career as an ethical hacker
on staff as a network administrator for the department; then moved into an outsourcing
role as a technician for CounterSign Software (www.countersign.com).
Today, he works for the school again, conducting contract ethical hacks remotely,
from his home in Madison, Wisconsin. (Because the term “hacking”
has such a negative connotation, however, Tatarsky prefers to call his line
of work “auditing.”) When it’s time to test the security of
the UCSC network, Tatarsky simply heads to his home office in the basement,
sits down at his personal computer, and hacks away.
After spending so much time “auditing” the network at UCSC’s
School of Engineering, Tatarsky’s approach is, at this point, formulaic.
With a set of scanning tools such as Nmap (available from the site www.insecure.org),
he enumerates—that is, systematically identifies—all network systems and
services running on the network. Next, he performs a Nessus vulnerability scan
(download from www.nessus.org)
against the systems found to be running “targets,” such as Web servers,
database servers, or in particular, Microsoft networking products. He compares
the output of the Nessus findings with a collection of constantly changing security
flaws (listings of flaws that have recently been exposed elsewhere) and proof-of-concept
exploits (actual hacks that have occurred elsewhere)—a collection that
he keeps by staying current on mailing lists, and by watching for “insider”
advisories. Finally, in a controlled environment, Tatarsky uses a tool called
VMware (www.vmware.com)
to conduct further “tests” of the exposed services, with the exploits.
The tool replicates the configuration of certain hosts inside a virtual
machine, enabling him to attack the copy instead of the real thing. If the
exploit proves to be a threat, he’ll take preventative measures such as
applying patches or adding firewall rules.
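Tatarsky’s enumerate-then-scan routine can be sketched in miniature. The Python below is purely illustrative: it stands in for what Nmap and Nessus do at far greater depth, and the host names, service versions, and advisory entries are invented.

```python
import socket

def scan_ports(host, ports, timeout=0.5):
    """Enumerate open TCP ports on a host -- a toy stand-in for Nmap."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            # connect_ex returns 0 when the connection succeeds (port open).
            if sock.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

def match_vulnerabilities(inventory, advisories):
    """Cross-reference a service inventory against known-flawed versions,
    the way Nessus output gets compared with advisory listings."""
    return [(host, service) for host, service in inventory.items()
            if service in advisories]

if __name__ == "__main__":
    # Hypothetical inventory and advisory list, for illustration only.
    inventory = {"web01": "httpd/1.3.29", "db01": "mysqld/4.0.18"}
    advisories = {"httpd/1.3.29", "wu-ftpd/2.6.0"}
    print(match_vulnerabilities(inventory, advisories))  # [('web01', 'httpd/1.3.29')]
```

A real audit would, of course, probe far more than port state and banner strings; the point here is only the two-step shape of the routine: enumerate first, then compare what you find against what is known to be broken.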
“I’m just a system administrator, trying to keep my machines relatively
pain-free,” he says, with a shrug. “What I do is analyze and test
several pieces of posted and captured hacking code a day, against systems that
I know from audits may have the type of flaw the code claims to exploit.”
These known exploits are at the center of everything Tatarsky does. He says
he focuses mostly on what are called “remote exploits,” in which
an individual on a remote system can gain privileged access to a system on another
network. He finds these exploits through security mailing lists, public
advisories, and hacker forums.
When he tests published exploits, he looks to see if an exploit is actually
a Trojan (malicious code that looks innocuous as it arrives over the network), how it
works, and whether it creates any further vulnerabilities that he can use in
his audit programs. Tatarsky also checks to see how easily the exploit can be
converted into “worm mode,” which enables a quick set of commands
to follow the attack and download or install backdoor software (also known as
“bots”) to allow remote control of the target unit (Web server,
desktop, etc.).
When he tests unpublished exploits (exploits that occurred elsewhere, on similar
systems, and which frequently become known because the hackers themselves brag
about the hacks in postings), Tatarsky performs a fair level of network analysis.
He looks for what he calls “new things,” with Intrusion Detection
System (IDS) technology such as that from Snort (www.snort.org).
With this tool, Tatarsky watches for unusual packets and patterns of use that
might lead him to new viruses and threats. While he doesn’t claim
to inspect every packet, he notes that certain activities draw his attention
to a possible “probe” or “test” of an emerging security
flaw. For example, Tatarsky says he’s noted several “public exploits”
being tested against his systems before they showed up in various forums (again,
hacker bragging). In one such instance, he documented and recorded a successful
compromise of several of his own institution’s Sun UNIX machines, and
managed to disable the service in the School of Engineering before any havoc
was wreaked. Elsewhere on campus, other departments weren’t as lucky.
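The traffic-watching Tatarsky describes can be approximated with a simple counter: flag any source address whose packet volume stands far outside the norm. This Python sketch is a crude stand-in for what a real IDS such as Snort does with full signature and protocol analysis; the traffic data and the factor-times-median threshold are invented for illustration.

```python
from collections import Counter
from statistics import median

def flag_anomalies(packet_sources, factor=10):
    """Flag source addresses sending far more packets than the typical host.
    A crude sketch of the 'unusual patterns of use' an IDS watches for;
    the factor-times-median threshold is an arbitrary illustrative choice."""
    counts = Counter(packet_sources)
    typical = median(counts.values())
    return sorted(src for src, n in counts.items() if n > factor * typical)

if __name__ == "__main__":
    # Invented traffic log: one host probing far more than its peers.
    traffic = (["10.0.0.2"] * 5 + ["10.0.0.3"] * 6 +
               ["10.0.0.4"] * 4 + ["10.0.0.99"] * 500)
    print(flag_anomalies(traffic))  # ['10.0.0.99']
```

The median is used rather than the mean because one noisy attacker would drag a mean-based threshold upward and hide itself; production systems layer signatures, protocol decoding, and stateful analysis on top of anything this simple.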
Generally, in fact, this is the case. During the last round of hack attacks
at UCSC, the School of Engineering experienced zero infected systems, while
the rest of the campus had more than 150. In the Sasser worm outbreak of 2004,
the School of Engineering lost 15 machines to the attack; elsewhere on campus,
other departments lost more than 200, all told. Finally, during the sizable
Linux attack reported at the University of California-Berkeley and Stanford
University (CA) in the second half of 2004, literally hundreds of machines
on these campuses were crippled by the hack, but at UCSC’s School of Engineering,
nobody has found a compromised machine yet. For roughly $2,000 per month, Tatarsky
has virtually eliminated hack attacks from the School of Engineering’s
network.
“Have we been lucky? Perhaps,” he says, carefully choosing his
words because, as he puts it, hackers like to “knock off” people
who get too cocky. “But I believe the combination of audit and regular
patching based on emerging vulnerabilities saved the department considerable
labor costs and down system costs.”
Naysayers Weigh In
Still, not every school supports ethical hacking. At Carnegie Mellon
University (PA), for instance, technologists openly campaign against
it, maintaining that hiring someone to help lock down a network directly contradicts
the school’s commitment to an open computing environment. Pradeep Khosla,
dean of the School of Engineering, admits that some CMU technologists engage
in “redteaming,” or penetration testing, when they are building
software programs, but insists that the school has never hired people specifically
to redteam its network as a whole. Khosla says that most of the vulnerabilities
hackers exploit are in commercial software programs that no amount of ethical
hacking can fix. With this in mind, CMU set out in 2003 to develop a state-of-the-art
network that can teach itself how to survive an attack.
The project is the mission of CyLab, CMU’s public/private partnership
to develop new technologies for measurable, available, secure, trustworthy,
and sustainable computing systems. Working closely with the CERT Coordination
Center (a CMU-based and internationally recognized center of Internet security
expertise) as well as the US Department of Homeland Security National Cyber
Security Division, CyLab boasts as its mission the goal of “protecting
all computer users from interference by cyber terrorists and hackers.”
As Khosla explains, to improve security proactively, the endeavor is designed
to focus on successes rather than failures—precisely what the purely
reactive strategy of ethical hacking fails to address.
“Building technology isn’t like building a bridge,” he says.
“With technology, there’s a lot that we simply don’t understand,
and it’s up to us to take that approach, as we improve security for everyone.”
IT leaders at Purdue University (IN) hold similar opinions
about ethical hacking. There, at the school’s Center for Education and
Research in Information Assurance and Security (CERIAS), Director Gene Spafford
encourages students and administrators to expand security efforts to include
issues of policy, architecture, and number of servers. When asked about ethical
hacking, Spafford says that instead of identifying “18 different”
weaknesses in a particular protocol, he supports a network that denies the protocol
access in the first place. He adds that while ethical hacking might address
vulnerabilities of the moment, the strategy fails to proactively address weaknesses
down the road.
His suggestions? First, Spafford urges administrators to separate (that is,
not integrate) systems with different policy and security needs, and to isolate
those important systems that don’t need to be hooked into the network.
Next, he suggests that administrators use network switches (which deliver
traffic only to its intended port, rather than broadcasting it to every
connected machine), virtual private networks (VPNs), and redundant
network channels, instead of running connections over one-way ports and unencrypted
lines.
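Spafford’s deny-by-default stance translates naturally into policy as code: rather than cataloging a protocol’s individual weaknesses, refuse it unless it is explicitly permitted. A minimal Python sketch, with the allowed services chosen purely for illustration:

```python
# A toy default-deny policy: a connection is accepted only when it matches
# an explicit allow rule; every other protocol is refused outright, so its
# individual weaknesses never need to be enumerated one by one.
# The permitted (protocol, port) pairs below are assumptions for illustration.
ALLOWED_SERVICES = {("tcp", 22), ("tcp", 443)}

def permit(protocol, port):
    """Return True only for explicitly allowed (protocol, port) pairs."""
    return (protocol, port) in ALLOWED_SERVICES

if __name__ == "__main__":
    print(permit("tcp", 443))  # True: explicitly allowed
    print(permit("udp", 161))  # False: denied by default
```

The design choice is the whole point: with an allow-list, every new or unexamined protocol is blocked automatically, whereas a deny-list must be updated each time a flaw surfaces.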
He advocates the use of off-the-shelf consumer products such as firewalls
and IPS tools, but notes that the key to success with these is staying on top
of patches, and incorporating them with proprietary security measures. Last,
Spafford calls for depth in a network, noting that the more systems a hacker
has to hack to get to the heart of a network, the more secure that network really
is.
“In the sense that most systems will eventually be penetrated, this
notion of ethical hacking begs the question of ‘Why Bother?’”
quips Spafford, also a professor of computer sciences. “It doesn’t
make any sense, because by the mere nature of systems architecture, you will
always find flaws.”
Down the Road
Trends, however, may be proving Spafford wrong. A number of schools are beginning
to offer classes in ethical hacking to get their system administrators up to
speed in frontline network defense. A new class at Mt. Sierra College
(CA), for instance, is designed to teach students how people will try to break
into network systems—and how they will succeed. For $4,000, the course
prepares corporate-level students for an exam offered by the International Council
of E-Commerce Consultants, or EC-Council. Instructors race through topics like
symmetric versus asymmetric encryption, hacker attack behaviors, and well-known
network weak points. If the students pass the test, they get the ultimate seal
of approval: Certified Ethical Hacker.
Another class, this one at Marshall University (WV), teaches
undergraduate and graduate students about security by familiarizing them with
common tools and strategies that hackers use. And while the class curriculum
doesn’t specifically help students learn how to go about hacking a network,
Brian Morgan, assistant professor of Integrated Science and Technology, admits
that intelligent students can certainly “put two and two together”
from what they learn, and do just about anything they’d like. Morgan says
he put the class together after he had taken an ethical hacking course sponsored
by New Horizons Computer Learning Centers (www.newhorizons.com),
a company that sponsors a variety of distance education opportunities. Down
the road, he says, he hopes to develop a new course specifically geared toward
teaching students how to engage in ethical hacking of their own.
“This kind of knowledge will only help students once they go out into
the real world and get jobs in the IT departments of big companies that take
security a lot more seriously [than we do in academia],” Morgan says.
“In the long run, no matter what your personal opinions on the subject
might be, the ability to think like a hacker and strengthen your network as
a result is a valuable skill to have.”