
The Myth and Reality of Risk

Just when my retirement investments were falling like a lead brick, the EPA informed me last month that I'm not worth as much as I was five years ago. While some of my skiing friends may heartily agree, that wasn't the basis for the EPA's judgment. They have determined that the "value of a statistical life" is $6.9 million--a drop of more than a million dollars in the last five years.

Government agencies do the same kind of quantitative risk analysis that we do in the security arena: risk is a combination of the probability of an event and the undesirable consequences of that event. In the EPA's case, the comparison was between the cost of implementing tighter pollution regulations and the value of reducing pollution, measured as the number of lives saved times the value of each life, $6.9 million.

The details of these calculations can have important policy implications. For example, if a proposed regulation will cost an industry $15 billion to implement but will save 2,000 lives, the cost ($15 billion) outweighs the benefits ($13.8 billion) if we use the new "value of a statistical life." But if we use the old value, $7.8 million, the benefits ($15.6 billion) outweigh the costs.
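To make that arithmetic concrete, here is a minimal sketch in Python. The dollar figures come from the paragraph above; the function and variable names are my own illustration, not anything from the EPA's methodology.

```python
# Cost-benefit comparison using the figures cited above.
def regulation_benefit(lives_saved, value_per_life):
    """Monetized benefit: lives saved times the value of a statistical life."""
    return lives_saved * value_per_life

cost = 15e9          # $15 billion to implement the regulation
lives_saved = 2_000

for label, vsl in [("new VSL ($6.9M)", 6.9e6), ("old VSL ($7.8M)", 7.8e6)]:
    benefit = regulation_benefit(lives_saved, vsl)
    verdict = "benefits outweigh costs" if benefit > cost else "costs outweigh benefits"
    print(f"{label}: benefit ${benefit / 1e9:.1f}B vs. cost ${cost / 1e9:.1f}B -> {verdict}")
```

The same regulation flips from "not worth it" to "worth it" depending solely on which value of a statistical life you plug in.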

Where did the $6.9 million value come from, and why did it change? The EPA figure didn't come from an individual's estimated earnings or societal value. (My spouse has assured me that I am worth more than $6.9 million.) It was based on what people are willing to pay to avoid risk, as measured by how much extra employers pay workers to do riskier jobs. The actual value was the result of combining two studies: one that came up with a value of $8.9 million and another that came up with a value between $2 million and $3.3 million.

The variance in the two studies resulted from subtle differences between comparing risky jobs and comparing risky industries. I don't know about you, but I don't have a warm and fuzzy feeling about the methodology or the fact that the answers varied by a factor of four. Neither did some of the members of the EPA's Science Advisory Board. According to Granger Morgan, chair of the Board and engineering and public policy professor at Carnegie Mellon University, "This sort of number-crunching is basically numerology."

Risk Management
That got me thinking about how we quantify risk in IT security. When we get to the final cost-benefit ratio everything may seem logical, but does it make sense, or is it just numerology?

The traditional view of risk management plots the probability of an event from very low to very high on one axis, say the vertical, and the impact of that event from very low to very high on the horizontal axis. The first priority is to deal with risks in the upper right of the graph, the ones with a combination of high impact and high probability. The ones in the lower left, low impact and low probability, get the lowest priority.
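One simple way to express that prioritization is to score each risk as probability times impact and work the list from the upper right of the chart toward the lower left. The sketch below is only illustrative: the 1-to-5 scales and the example risks are my own assumptions, not part of any particular framework.

```python
# Illustrative risk register: each risk gets a probability and an impact
# rating on a 1 (very low) to 5 (very high) scale. The entries are hypothetical.
risks = {
    "Unpatched public web server": (4, 5),
    "Laptop theft":                (3, 3),
    "Data center flood":           (1, 5),
    "Stolen whiteboard marker":    (2, 1),
}

# Score each risk as probability x impact and rank from high to low,
# mirroring the upper-right-first reading of the traditional graph.
ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)

for name, (prob, impact) in ranked:
    print(f"{name:30s} probability={prob} impact={impact} score={prob * impact}")
```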

It's all so logical. What can go wrong? As I see it, there are three places we can go astray:

1. The accuracy of the quantitative model;
2. The quality of the data used in the model; and
3. The fact that people are not always rational.

Accuracy of the Model
Any quantitative model involves assumptions that dramatically impact the results. In 2006 the EPA released new regulations on mercury emissions from coal-burning plants based on estimates that reducing emissions more aggressively would cost the coal industry $750 million a year while benefiting public health by only $50 million per year. But a cost-benefit study by Harvard's Center for Risk Analysis, ironically funded by the EPA, concluded that the $750 million of expenditure by the coal industry would result in a public health savings of $5 billion per year. It turns out that the EPA's analysis focused on the effects of reducing mercury levels in freshwater fish while the Harvard analysis included ocean fish such as tuna.

Quality of the Data
Those of us in IT know how hard it is to get good data. In the 1970s the Occupational Safety and Health Administration (OSHA) was considering the cost of an 85-dB noise standard. One defense contractor estimated the cost to be $31.6 billion; another estimated the cost at $11.7 billion. The difference was the technology used to reduce noise. In risk analysis the problem is further compounded by the fact that some things are hard to quantify. The EPA's "value of a statistical life" is a good example.

People Are Not Always Rational
Finally, people aren't particularly rational about risk. Shortly after the shootings at Virginia Tech, I wrote in this column:
"Ironically, FBI statistics show that the murder and non-negligent manslaughter rate in the United States has been steadily falling since 1993, from 9.5 per 100,000 people in 1993 to 5.6 in 2005. College and universities are even safer. In 2005 there were 5 murders and non-negligent manslaughters on campus out of a population 6.3 million students. The resulting 0.08 per 100,000 students is less than 2% of the national average....Stated differently, even if an event like the one at Virginia Tech were to happen every year, a student is far more likely to be murdered while home on summer vacation than on campus during the academic year."
And how much traction did that argument have with concerned parents? Were our subsequent activities to develop prevention and response plans based on a structured cost-benefit analysis? No. They were based on the concerns of students and their parents.
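For the record, the per-100,000 arithmetic in that quote is easy to check. Here is a quick sketch in Python using the figures cited above (the variable names are mine):

```python
# Reproduce the per-100,000 rate comparison from the quote.
campus_murders = 5
campus_population = 6_300_000
national_rate = 5.6  # murders per 100,000 people in 2005, per the quote

campus_rate = campus_murders / campus_population * 100_000
print(f"Campus rate: {campus_rate:.2f} per 100,000")                  # about 0.08
print(f"Share of national rate: {campus_rate / national_rate:.1%}")   # under 2%
```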

Common Sense?
I am an avid spreadsheet user; at last count there were more than 1,865 of them on my computer. But reality is sometimes summarized by that often-quoted line, "Life is not a spreadsheet." Reducing a complex problem to a single number or ratio is not easy, nor is it necessarily good policy.

On the other hand, we can't just fly by the seat of our pants and rely on "common sense" either. The real world is complicated, and common sense is frequently wrong because we seldom know all of the relevant details or understand the complex interactions and the human side of the equation. And sometimes common sense can be shown to be mathematically, flat-out wrong. The "Monty Hall Problem," a probability puzzle named after the host of the old TV show "Let's Make a Deal," is a famous illustration that common sense is sometimes nonsense.
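The puzzle is simple: a prize sits behind one of three doors, you pick one, the host opens a different door he knows is empty, and you may switch. Common sense says switching shouldn't matter; the math says switching wins about two times out of three. A short simulation makes the point. This is a minimal sketch in Python; the function name and trial count are my own choices.

```python
import random

def monty_hall(switch, trials=100_000):
    """Simulate the Monty Hall game; return the fraction of wins."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the prize
        pick = random.randrange(3)   # contestant's initial choice
        # The host opens a door that is neither the pick nor the prize door.
        opened = next(d for d in range(3) if d != pick and d != car)
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print(f"Stay:   win rate ~ {monty_hall(switch=False):.3f}")  # about 1/3
print(f"Switch: win rate ~ {monty_hall(switch=True):.3f}")   # about 2/3
```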

Simplistic analysis frequently falls short, complex analysis is just that, and common sense isn't always up to the task. That leaves us with a need for quantitative analysis, tempered with a healthy dose of common sense, and a clear understanding that common sense is sometimes nonsense. Sounds a lot like a committee with a couple of vocal skeptics.