Is network offense the best network defense?

Joseph Menn has a good Reuters article on a growing sentiment within network security circles:

Frustrated by their inability to stop sophisticated hacking attacks or use the law to punish their assailants, an increasing number of U.S. companies are taking retaliatory action.

Known in the cyber security industry as “active defense” or “strike-back” technology, the reprisals range from modest steps to distract and delay a hacker to more controversial measures. Security experts say they even know of some cases where companies have taken action that could violate laws in the United States or other countries, such as hiring contractors to hack the assailant’s own systems.

Exactly. It’s clear that current defenses have failed against a cadre of state-sponsored attackers whose tactics have been pretty standard for years: targeted spear-phishing, uploaded remote access tools, privilege escalation, and network ownership, with redundant backdoors to make the access permanent. And it’s only a matter of time before crooks and quasi-politicized griefers succeed at the same game.

Defenders are reduced to reacting after the intruders have broken in and started stealing data, and they’re mostly failing even at that. When they do succeed, it’s because they’ve hired experts to carry out an expensive, specialized intelligence-gathering campaign, followed by a spasm of action over the weekend in the hope of nailing every backdoor shut all at once. Meanwhile, network owners are, or ought to be, desperately searching for the online equivalent of a safe room, where their most important secrets can be held while intruders ransack the rest of the house.

But maintaining an air-gapped safe room requires more discipline than most institutions can muster. So that strategy is likely to fail as well.

Government, meanwhile, is nearly useless to corporate victims. Report your intrusion to the FBI or DHS, and the response will be familiar, at least if you’ve ever reported a stolen bike to a big-city police department. If you’re lucky, you’ll get a little sympathy and some advice on the best kind of lock to buy for your next bike. Remarkably, our government now treats the systematic looting of America’s corporate know-how pretty much the same way. They don’t have time to chase the guys in your network. They’re too busy chasing the guys who stole stealth technology from their networks.

No wonder companies whose every secret is being stolen aren’t satisfied with the standard defense measures. The result, as Menn’s article suggests, is a growing sense that companies need to do more on their own to solve this problem. There are at least three ideas now in play.

First, installing honeypots, fake networks, and fake documents to slow the attackers down, leave them confused, and perhaps provide the defenders an early warning that the outer walls have been breached.

Second, building “beacons” into your documents, so that when they’re stolen and opened by the attackers, the documents phone home, telling you not only that you’ve been compromised but maybe something about the guys who did it. I call that a “digital dye pack,” after the exploding pack of hundred-dollar bills that bank robbers have learned to fear. (Of course, it’s possible to go further and load your documents with malware that will not just phone home but will seriously harm whatever network it’s opened on. But that’s not a popular idea, for the same reason that banks don’t use fragmentation bombs in place of dye; the risk of hurting bystanders is just too great.)

Third, companies are talking about hacking back: using the information provided by beacons and other intelligence to break into the attackers’ own networks and gather evidence of their identity. This too has a lot of appeal. We’ll never deter such attacks until we deter the attackers, which means identifying them and then doing things that hurt them. It might mean exposing the governments that sponsor them. It might mean civil or criminal suits against the hackers, their governments, or the companies to which they give the stolen data. It might mean imposing sanctions on complicit governments or on any of their exports that facilitate such espionage. But none of that can happen without pretty good evidence about who the attackers are and who’s sponsoring them, evidence that is likely to require intrusion into the hackers’ networks.

Here’s the problem. A generation of computer crime lawyers at the Justice Department have devoted their careers to discouraging the reaction that Menn describes. That’s because the fundamental law in this area, the law they’ve been writing and rewriting for the last twenty-five years, known as the Computer Fraud and Abuse Act, makes it a felony to do pretty much anything to a computer on the Internet “without authorization.” That includes doing things to a hacker’s computer without his authorization. A graduate of Justice’s computer crime section once shut down a discussion on this topic by saying, “What if you followed the hacker back to a hospital network, and in trying to catch him you shut down computers in the intensive care unit? That’s a felony murder rap.”

Still, even under the Computer Fraud and Abuse Act, there are real differences among the tools the Menn article discusses.

Honeypots, fake networks, and the like present no real problem, it seems to me.  They’re on your network, and you can authorize yourself to do what you like there.
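
To make the early-warning idea concrete, here’s a minimal sketch of the simplest kind of honeypot: a fake service that no legitimate user has any reason to touch, so that any connection to it is presumptively an intruder mapping your network. The port number and log file below are hypothetical choices of mine, not anything from Menn’s reporting:

    import logging
    import socketserver

    logging.basicConfig(filename="honeypot.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class CanaryHandler(socketserver.BaseRequestHandler):
        def handle(self):
            # Log who connected and whatever they sent; the point is the
            # alarm, not the interaction.
            self.request.settimeout(5)
            try:
                probe = self.request.recv(1024)
            except OSError:
                probe = b""
            logging.info("canary hit from %s:%d, first bytes %r",
                         self.client_address[0], self.client_address[1], probe)

    if __name__ == "__main__":
        # Port 2222 is a hypothetical decoy: an SSH-looking service that
        # nothing legitimate on the network is configured to use.
        socketserver.ThreadingTCPServer(("", 2222), CanaryHandler).serve_forever()

Anything that trips this wire is, almost by definition, worth investigating. And because the trap sits entirely on your own network, the authorization question answers itself.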

“Beaconed” documents are a little trickier, since they phone home from someone else’s network. In my view, though, there are a lot of ways to design beacons without incurring liability. To take one obvious example, you could install web bugs or “clear gifs” in your documents; these use standard HTML code to pull web-based content into the document. Since the content must be fetched from a web server, the server can record every time the document is opened. Email “receipt” services use this technique to tell you when your message has been opened by the recipient. Given its commonplace incorporation into web standards, a beacon like this should be seen as having been “authorized” by anyone who opens the document. (Whether this technique will work is a different question; I suspect most state-sponsored attackers are too savvy to be caught quite so easily.)
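
For illustration, here’s a minimal sketch of the server side of such a beacon, assuming the document carries an ordinary HTML image reference along the lines of <img src="http://beacon.example.com/b.gif?doc=roadmap" height="1" width="1"> (the hostname and the “doc” label are hypothetical). Whenever a viewer fetches the remote image, the server records which document phoned home and from where:

    import logging
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import parse_qs, urlparse

    # A valid one-pixel transparent GIF: the classic "clear gif."
    CLEAR_GIF = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
                 b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00"
                 b"\x01\x00\x00\x02\x02D\x01\x00;")

    logging.basicConfig(filename="beacon.log", level=logging.INFO,
                        format="%(asctime)s %(message)s")

    class BeaconHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            doc = parse_qs(urlparse(self.path).query).get("doc", ["unknown"])[0]
            # The payoff: which document was opened, by what address and client.
            logging.info("beacon doc=%s ip=%s ua=%s", doc,
                         self.client_address[0],
                         self.headers.get("User-Agent", "-"))
            self.send_response(200)
            self.send_header("Content-Type", "image/gif")
            self.send_header("Content-Length", str(len(CLEAR_GIF)))
            self.end_headers()
            self.wfile.write(CLEAR_GIF)

    if __name__ == "__main__":
        HTTPServer(("", 8080), BeaconHandler).serve_forever()

The same log line that tells you a document has left the building also hands you the first thread to pull: an address and a client fingerprint for whoever opened it.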

And what about going to the belly of the beast — compromising the networks of our attackers?  It’s an intriguing idea from a policy point of view.  If our security sucks, it’s safe to assume that the bad guys’ security does too.  If, as seems true today, the offense always wins in cyberintrusion battles, then we won’t win until we take the offense. And our overwhelmed law enforcement and intelligence agencies have no offensive capacity to spare for corporate victims. So why not unleash private resources, under proper supervision?

That last phrase is important. We don’t need a bunch of vigilantes hacking cyberthieves on their own say-so. Just imagine how the MPAA would define “cyberthief” and you’ll understand the problem. But there’s a wide range of options for authorizing victims to take action without creating vigilantes. Everyone who’s seen a Western knows the difference between the scene where the sheriff faces a bunch of citizens with a noose and the one where he calls on a bunch of citizens to form a posse. And anyone who’s studied Sir Francis Drake’s career knows that, while pirates were hanged, privateers were knighted.

The Computer Fraud and Abuse Act has been on the books for more than twenty-five years. It’s been revised a dozen times by the prosecutors who love it. And in all that time we’ve seen a massive explosion in computer insecurity and crime. So it’s not as though the law as written is working. If the law as written prevents even responsible efforts by companies to catch the people who are robbing them blind, maybe it’s the law that needs to be reconsidered.

 
