[Paul Ohm (guest-blogging), April 11, 2007 at 1:09pm]
The Myth of the Superuser, Part Three, The Failure of Expertise:

Over the past two days of discussion about my article, I have essentially been saying that we (policymakers, lawyers, law professors, computer security experts) do a lousy job calculating the risks posed by Superusers. This sounds a lot like what is said about other contested risks, such as global warming, the safety of nuclear power plants, or the dangers of genetically modified foods. But there is a significant difference: researchers who study those other risks rigorously analyze data. In fact, their focus on numbers and probabilities, and the average person's seeming disregard for statistics, is a central mystery pursued by many legal scholars who study risk, such as Cass Sunstein in his book, Laws of Fear.

In stark contrast, experts in the field of computer crime and computer security are seemingly uninterested in probabilities. Computer experts rarely assess a risk of online harm as anything but "significant," and they almost never compare different categories of harm for relative risk. Why do these experts seem so willing to abdicate the important risk-calculating role played by their counterparts in other fields? Consider four explanations:

1. Pervasive Secrecy. Online risks are shrouded in secrecy. Software developers use trade secrecy laws and compiled code to keep details from the public. Computer hackers dwell in a shadowy underground. Security consultants are bound contractually not to reveal the identities of those who hire them. Law enforcement agencies refuse to divulge statistics about the number, type, and extent of their investigations and resist Congressional attempts to increase public reporting.

Which brings us to California SB 1386. Inspired by experiences with this law, Adam Shostack argued at this year's Shmoocon that "Security Breaches are Good for You," by which he really meant, "breach disclosure is good for you," setting off a mini-debate in a couple of blogs. (See this post and work backwards from there.) On his blog, Adam said:

The reason that breaches are so important is that they provide us with an objective and hard-to-manipulate data set which we can use to look at the world. It's a basis for evidence in computer security. Breaches offer a unique and new opportunity to study what really goes wrong. They allow us to move beyond purely qualitative arguments about how bad things are, or why they are bad, and add quantification.

I think Adam is on to something, and this quote echoes some of my conclusions in the article. But I'm not hitching my argument directly to his. Even if you conclude that Adam is wrong, and that the need for secrecy and non-disclosure trumps his desire for a more scientific approach to computer security, secrecy still shouldn't trump accurate, informed policymaking (lawmaking, judging). What does this mean? If someone wants to keep the details behind a particular risk secret, for whatever reason, perhaps that's his prerogative. But if he then complains to policymakers about vague, anecdotal, shrouded risks, he should be ignored, or at least his opinion should be greatly discounted.

2. Everyone is an Expert. "Computer expert" is a title too easily obtained. Unlike modern medicine, where the signal advances require money and years of formal education to achieve, computer breakthroughs tend to come from self-taught tinkerers. In many ways, the democratizing nature of online expertise is cause for celebration; it is part of what makes Internet innovation and entrepreneurship so exciting.

The problem is that so-called computer experts tend to have neither the training nor the inclination to approach problems statistically and empirically. People can be called before Congress to testify about identity theft or network security even if they have no idea how often these risks occur, and no interest in finding out. Their presence on a speakers' list crowds out the few who are thinking about these things empirically and rigorously.

3. Self-Interest. Many experts have a self-interest in portraying online actors as sophisticated hackers capable of awesome power. Law enforcement officials spin yarns about legions of expert hackers to gain new criminal laws, surveillance powers, and resources. The media enjoy high ratings and ad revenue reporting on online risks. Security vendors will sell more units in a world of unbridled power.

4. The Need for Interdisciplinary Work. Finally, too many experts consider online risk assessment to be somebody else's concern. Computer security experts often conclude simply that all computer software is flawed, and that malicious attackers can and will exploit those flaws if they are sufficiently motivated. The question isn't a technology question at all, they contend, but one of means, motive, and opportunity -- questions for criminologists, not engineers.

Criminologists, for their part, spend little time studying computer crime, perhaps assuming that vulnerability-exploit models can only be analyzed using computer science. The answer, of course, is that they're both wrong -- and both right. Assessing an online risk requires an interdisciplinary blend of computer science, psychology, and sociology; short-sighted analyses that draw on only some of these disciplines often get it wrong.

One Prescription: Better Data. I won't spend too much time summarizing my prescriptions. The gist is that we need to start to police our rhetoric, and we need to do a better job collecting and using data. Two sources of data seem especially promising: the studies coming out of the burgeoning Economics of Information Security discipline, and the ongoing National Computer Security Survey co-sponsored by DOJ's Bureau of Justice Statistics and DHS's National Cyber Security Division and administered by RAND.

There is much more to my arguments and prescriptions, but I hope this is a good sample. Tomorrow, I will transition to something very different: a two-day look at a paper I have co-authored describing some empirical results about the Analog Hole and about consumer willingness-to-pay for digital music.

Max Hailperin (mail) (www):
I would make a different, but related and synergistic, point regarding risk assessment. Namely, even if we don't have the data to estimate probabilities, we still benefit from the analytic framework of risk, because it reminds us of the importance of keeping the different risk abatement techniques in balance, rather than overinvesting in any one.

I made this point in the introductory part of the security chapter of my textbook on operating systems principles. Others have made it as well. My audience, of course, is in the community of computer technologists, so the alternative of persuading Congress to pass a new law isn't one of the risk abatement techniques they typically consider. Instead their predisposition is almost always to "harden the target," by making some system less vulnerable to attack. But that is far from the only option, and is in fact often one that is subject to overinvestment, as can be realized just through qualitative consideration, even without the data for a more quantitative approach. I suspect that the same holds true for the approach of legislation.

Risk stems from three factors:
(1) the probability of an adversary choosing to attack
(2) the probability that an attempted attack will succeed
(3) the amount of damage that a successful attack will do.

The mistake I see technologists making is focusing too much on factor (2) and ignoring (1) and (3). Some hardening of targets is surely necessary, but after a point, the incremental gain from yet further vulnerability reduction is less than from putting some investment into such matters as not creating disgruntled employees and reducing the amount that an adversary stands to gain from a successful attack (both affecting factor 1), or reducing reliance on any one computer system (reducing factor 3). So, having spent my effort persuading my students not to overlook 1 and 3, I read Ohm as saying that when we go beyond the technical community, we need to recognize that within 1 we may also be overinvesting in some subcategories. We are focusing too much on causing potential attackers to choose not to attack by threatening them with ever greater legal risk, rather than focusing enough on other ways of causing them not to attack (like not creating disgruntled employees, again), or for that matter on categories 2 and 3.
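To make the balance point concrete, here is a minimal sketch in Python. Every number in it is hypothetical, chosen only to show that the three factors multiply, so a proportional reduction in any one of them buys the same reduction in expected loss; what differs is the marginal cost of achieving it:

```python
# A sketch of the three-factor model: expected annual loss is
#   P(attack attempted) * P(attack succeeds | attempted) * damage.
# Every number here is hypothetical, for illustration only.

def annual_risk(p_attack: float, p_success: float, damage: float) -> float:
    return p_attack * p_success * damage

baseline = annual_risk(p_attack=0.10, p_success=0.05, damage=1_000_000)

# Halving any one factor halves the expected loss, because the
# factors multiply:
hardened    = annual_risk(0.10, 0.025, 1_000_000)  # factor 2: harden the target
deterred    = annual_risk(0.05, 0.05,  1_000_000)  # factor 1: fewer attacks attempted
diversified = annual_risk(0.10, 0.05,    500_000)  # factor 3: less damage per breach

print(f"baseline ${baseline:,.0f}; each option ${hardened:,.0f}")
# -> baseline $5,000; each option $2,500
```

Once the next unit of vulnerability reduction costs more than the same reduction bought through factor (1) or (3), continued hardening is overinvestment -- a conclusion that needs no precise probabilities, only rough orders of magnitude.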
4.11.2007 2:33pm
Fub:
Paul Ohm (guest-blogging), April 11, 2007 at 1:09pm wrote:
One Prescription: Better Data. I won't spend too much time summarizing my prescriptions. The gist is that we need to start to police our rhetoric, ...
Amen! But good luck convincing interested parties, both inside and outside government, to cool the overblown rhetoric that they believe (probably correctly) brings home their bacon. I'd draw an analogy to drug policy, which shares some political and social elements with computer crime policy.

1. Most people don't know from shinola about drugs, or computers.

2. Most people also have completely irrational fears, as well as completely irrational expectations, of both.

3. What gain is there for a self-serving politico or bureaucrat in "cooling the rhetoric," or even in using honest empirical analysis to formulate policy, in such a political tinderbox? There isn't any. The dog who gets the gravy is the one that barks the loudest and frightens the ignorant and gullible into believing he's protecting them from some grave danger.
... and we need to do a better job collecting and using data. Two sources of data seem especially promising: the studies coming out of the burgeoning Economics of Information Security discipline, and the ongoing National Computer Security Survey co-sponsored by DOJ's Bureau of Justice Statistics and DHS's National Cyber Security Division and administered by RAND.
Again, the drug policy analogy might shed some light. Under current law and policy, honest research on any illegal drug is almost impossible: no research can even be conducted without NIDA's permission (and anybody who believes NIDA is honest probably also believes in the tooth fairy).

Similarly, for just one example, no honest research can be done on any currently used DRM scheme without permission from the copyright owner, or the courts, or the DOJ (by stating it will not prosecute the researcher for DMCA violations). Such exceptions rarely, if ever, happen. So honest researchers are always at risk of suit or criminal prosecution. The same interests who profit from egregious hype about computer and media security are the ones in a position to permit or deny honest research.

I think the situation in each case is a variant of regulatory capture. The variant is that those who have captured the regulatory institutions are not those who are regulated, but those who build empires by finding more things to regulate. As an example of how egregious the law has become: theoretically, the DOJ could prosecute anybody who decrypts this for a DMCA violation: Guvf zrffntr vf vagraqrq gb or rapelcgrq naq abg gb or ernq ol gur trareny choyvp. They probably won't, but they certainly could tie somebody up in court for a long time if they chose to. One problem is that there is no functioning "giggle test" among courts for computer crime prosecutions, as clearly evidenced by the Bret McDaniel case cited previously in your discussions.
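To underline how low that technical bar sits, the sentence above is plain ROT13, and the Python standard library undoes it in one call:

```python
import codecs

# The "encrypted" sentence is ROT13, a simple letter-substitution
# scheme; decoding it takes a single standard-library call.
ciphertext = ("Guvf zrffntr vf vagraqrq gb or rapelcgrq "
              "naq abg gb or ernq ol gur trareny choyvp.")
print(codecs.decode(ciphertext, "rot13"))
# -> This message is intended to be encrypted and not to be read
#    by the general public.
```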

When the risks to the honest are so high, the charlatans and wannabe tyrants have won. Whether the victory is temporary or permanent, I don't know. Good luck to you in your attempts to reform the current injustices.
4.11.2007 3:12pm
Aultimer:
Good data won't happen. There's no reasonable way to connect a disclosure or incident to actual harm. Threat coordination efforts (e.g., CERT and FS-ISAC) are still the best way to reduce actual harm. Disclosure laws are typical government "gotta do something" efforts. What real benefit does a data subject have from a mandatory disclosure of a breach?
4.11.2007 4:05pm
Gerg:
I have a different take on why probabilities are pointless in computer security. Think of what you do with a probability -- you multiply it by the cost of the consequences and take that to be the expected value of the risk that you're exposed to.

But the aspect of computer security that makes it different from "real-world" risks is that virtually every exposure, no matter how exotic or difficult to exploit, can quickly be automated, publicized, and leveraged into a major breach.

So, for example, we can calculate the probability that a burglar will be able to pick my front door lock, multiply it by the value of my property, and see what the expected cost of the risk of burglary is for my apartment. If it would take hours to pick the lock, they'll just break the door down; and if it costs hundreds of dollars to eliminate the last 0.01% probability that someone will break in that way, it just isn't worth it to me, since the exposure is less than that cost anyway.
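To put rough numbers on that (the 0.01% is from the example above; the property value and lock price are made up):

```python
# Illustrative burglary arithmetic: expected loss vs. mitigation cost.
p_pick_lock    = 0.0001    # residual chance per year of a lock-pick break-in
property_value = 20_000.0  # dollars at stake (hypothetical)
upgrade_cost   = 300.0     # price of eliminating that residual chance

expected_loss = p_pick_lock * property_value  # about $2 per year
print(upgrade_cost > expected_loss)           # True: the upgrade isn't worth it
```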

But if there's even a 0.00001% chance an attack will succeed against my broadband router, the script kiddies will quickly be passing around exploits to automate that attack, and all vulnerable routers will quickly find themselves regularly attacked.

Essentially, because of automation and the world-wide reach of the internet, computer security is an all-or-nothing thing. Either your system is secure and repulses the thousands of attacks per day that it receives, or it isn't, in which case it will inevitably be compromised and your only reliable way to recover is to start from scratch.
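The arithmetic behind that all-or-nothing claim is just repeated trials: a per-attempt success probability that looks negligible compounds over the attempts an internet-facing host actually sees. The figures below reuse the 0.00001% and thousands-per-day numbers from above:

```python
# How a negligible per-attempt probability compounds under automation.
p_success = 1e-7              # 0.00001%: chance one automated attempt succeeds
attempts  = 1000 * 365        # ~1,000 probes a day for a year

p_owned = 1 - (1 - p_success) ** attempts
print(f"{p_owned:.1%}")       # ~3.6%

# Raise p_success to 1e-5 (a slightly less exotic flaw) and the same
# year of probing pushes the figure past 97%: compromise stops being
# a question of "if" and becomes one of "when".
```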

Anytime you see someone waxing poetic about "security in depth" and "hardening," it's a sure sign they don't actually know what they're talking about and are just parroting principles taught to novice sysadmins to keep them out of trouble.
4.11.2007 4:27pm
Fub:
Aultimer wrote:
Disclosure laws are typical government "gotta do something" efforts. What real benefit does a data subject have from a mandatory disclosure of a breach?
For one thing, they could cancel all their credit cards, notify all financial institutions they deal with, close accounts if necessary, etc. A breach just means that somebody unauthorized has the keys to various individuals' barns. It's not unlike a large number of people all losing their credit cards at the same time. Sometimes, with adequate notice, it is possible for individuals to close their individual barn doors before the thieves take all the horses in the neighborhood.
4.11.2007 4:32pm
logicnazi (mail) (www):
Simply getting experts to testify about probabilities, and funding more research into the actual risks, won't be enough. I mean, consider the situation with illicit drugs. Despite boatloads of data, both on the risks of the substances themselves and on the effects they have on society, we have nothing like a rational policy on drugs. The evidence about MJ simply isn't enough, because people are afraid of it and are more likely to vote on their fear than on a dry statistical argument.

Until the public is no longer ignorant of, and hence terrified by, the possibilities of computer crime, no amount of testimony from experts will fix the problem, since there will always be someone spinning a self-interested scare story. Only when common sense lets people dismiss the emotional impact of these fear stories will this problem be fixed.
4.11.2007 4:50pm
William Oliver (mail) (www):
"The problem is that so-called computer experts tend to have neither the training nor inclination to approach problems statistically and empirically..."

"Criminologists, for their part, spend little time studying computer crime, perhaps assuming that vulnerability-exploit models can only be analyzed using computer science...."

Criminologists may not spend much time at this, but criminalists do. And they are experts, not "so-called" experts. Last year, the American Academy of Forensic Sciences established a new Section (a recognition of a major discipline) devoted to digital evidence. Further, there are standards for investigation of and testimony on digital evidence that you seem to be ignoring. I would suggest you look at the standards promulgated by ASCLD-LAB (the American Society of Crime Laboratory Directors' Laboratory Accreditation Board) and the SWGs (Scientific Working Groups) supported by the DOJ -- in particular SWGDE (Digital Evidence) and SWGIT (Imaging Technologies).

The National Center for Forensic Science, in association with professional organizations such as the IAI (International Association for Identification), is developing certification guidelines. Thus, there exist today accreditation standards and best-practices guidelines generally accepted within the community. While individual certification is a bit fragmented today, which one should expect with a relatively new discipline, in the near future there will be generally accepted certification standards. These should be acknowledged in any article dealing with this kind of digital evidence. Any broad-brush statement about "so-called" experts that ignores them is misleading.
4.12.2007 11:55am
Balt (mail):
Fub wrote:

For one thing, they could cancel all their credit cards, notify all financial institutions they deal with, close accounts if necessary, etc. A breach just means that somebody unauthorized has the keys to various individuals' barns.


Maybe that's what it means, and maybe not. Maybe it means that someone has the keys to various individuals' hope chests, but everyone is spending money replacing their barn locks. Or they may be replacing all their locks, "just in case." Or maybe they'll just burn the barn down (i.e., close their accounts).

What if the local barn inspector loses his key ring but then a nice neighbor finds it and returns it? New locks for everyone?

Many of the notice laws want to treat these matters as absolutes, when they are anything but.
4.12.2007 12:34pm
DevonMcC (mail):
As evidence against the danger of the "SuperHacker," consider that a couple of books about major incursions depict the perpetrators as rather average people with too much time on their hands. In fact, the criminal in the book "At Large: The Strange Case of the World's Biggest Internet Invasion" (by David Freedman and Charles Mann) is depicted as possibly below-average in intelligence. What he lacked in intelligence, though, he compensated for with obsessive single-mindedness.

Clifford Stoll's book "The Cuckoo's Egg" also does not paint a picture of the hackers as particularly accomplished. They exploited a few well-known vulnerabilities on a lot of poorly maintained systems.
4.12.2007 1:44pm