Friday, June 3, 2011

Cyberwar and Mary Poppins' Bag

Earlier this week, the Wall Street Journal led with the headline "Cyber Combat: Act of War".  The article describes an unpublished Pentagon strategy paper finding that a computer network attack can be treated as an act of war.  The article also says the military is leaning towards the concept of "equivalence" in determining whether an attack rises to the level of an act of war.  In short, this means that if a cyber-attack[1] causes or attempts to cause damage equivalent to a military attack, then it would be treated as one.  This is a step past the Obama administration's "International Strategy for Cyberspace" document, which states that the United States will use a "range of credible response options" to secure cyberspace.

If the reaction to the ISC paper is any indicator, people will immediately begin to lose their fragile little minds over this news.  The standard argument will revolve around attribution.  But the difficulty in identifying the attacker is a matter of execution, not a matter of policy.  Also, a president facing a major incident will use the military as an option if he thinks he needs to, no matter how the attack was delivered.  The more interesting question is whether the policy will be an effective deterrent to foreign aggression.

Probably the best public discussion of deterrence in the realm of network security comes from Richard Clarke in his book "Cyber War: The Next Threat to National Security and What to Do About It."  Clarke argues that "deterrence theory is probably the least transferable" of concepts from nuclear strategy.  In particular the "demonstration effect", the show of force that reminds adversaries of the horrible consequences of a nuclear exchange, is lacking a parallel in cyber warfare.  We all have images of mushroom clouds and shadows burnt into the pavements forever ingrained in our minds.  But the worst thing we've seen in the realm of network attacks is (for some value of worst) Stuxnet.  If the threat is "we'll use a cyber attack if you do", then our adversaries simply won't be deterred.

The lack of "scary" isn't the only challenge deterrence faces when you limit your options to cyberspace.  The incredible degree to which the United States relies on computers places it on an asymmetric footing with most other nations of the world. In some scenarios, there simply isn't an "in-kind" option available to the U.S. and it is probably safe to say that the U.S. will do almost anything to avoid an asymmetric conflict.  We just can't hurt some countries as much as some countries can hurt us.

The administration seems to have come to a conclusion similar to Clarke's:  Without the threat of a kinetic response, there is no chance that attackers will be given pause.  Only with the full "panoply of power"[2] available can we hope to deter our adversaries.  In the end, this is what the American people will demand anyway.  It won't matter if twenty people were killed because of a bomb or because a train was intentionally misrouted; they'll be looking for the President to act, and act strongly.

In my opinion, the policy statement is reasonable and in line with reality.  It deals with the extremes of what we think of as cyber attacks -- those causing substantive real-world damage.  The policy reflects the likely response of the United States to an incident that is on par with a military attack, regardless of policy.  The policy explicitly lays out that all options will be on the table and ensures that all players understand that.  Finally, if you were to remove from the policy the fact that we're talking about computer-based attacks, there isn't anything unusual in it:  Attack us and you can expect a response.

So...am I wrong?

[1] - Look, there is going to be a ton of cyber-this and cyber-that in this post, just deal with it.

[2] - "Panoply of power" is a phrase I first saw in Clarke's book.  It essentially means: "I'm going to open up the Mary Poppins bag and I can use anything I find in there."

Tuesday, May 31, 2011

Call to Arms

Last weekend, at the ripe old age of 37, I finally graduated from college.  This reasonably leads to two questions:
  1. Why did it take me 19 years to finish a 4 year school?
  2. Why did I bother after all this time?
Let's ignore the first, since it properly implies some emotional or mental failure on my part.  But, to address the second, the reason, and the only reason, that I finally finished school is that they wouldn't let me into a master's program without finishing my degree.

Why a master's?  I wanted to spend some time really digging into the technical side of information security.  I wanted to sit and think about the problems I feel haven't been adequately addressed, and I wanted to work through the technical areas I haven't had time to work through in my professional life.  I didn't want to waste time on risk management, policy, project management, psychology or ethics.  I wanted to go hardcore.  So I searched for programs that were highly technical in nature.  And I searched.  And I searched.  And I searched.  And I found...two.

Now, I'm sure I missed some somewhere.  But if I wanted a master's in "Cybersecurity Policy" or, god forbid, "Homeland Security with information assurance focus", I'd have all the choices in the world.  If I want to manage the paper shuffle that drives an organization's information security, I'm set (available with a dual MBA option!!).  So, what is wrong with this?  What is wrong with this is my wife is mad at me.

She is a manager inside a government agency and she said something to the effect that in order to get promoted on the high-end of the scale you had to make an impact on a management level.  This triggered some level of agitation and a loss of self-preservation on my part.  I said, "You know what is easy to find?  Someone who wants to manage.  Do you know what is hard to find?  Tech people who really know what the hell they are doing."  So...couch time for me and a good shunning for my father, who dared to nod wisely at my faux pas.

But this is the core of the problem:  We're up to our eyeballs in risk analysis, risk informed policy, audits and people who want to manage because that is where the money is (or, worse, good tech people who now manage because that is where the money is).  There is, speaking on a nation-wide scale, a famine of hardcore technical security specialists who really know what the threat landscape is, what their tools can do and how best to react to incidents.  Even people who really want to go deep and be able to do more than update patches are, to a large degree, left to their own devices.  Far too few have taken on the challenge of going it alone.

Even well known educational institutions are having problems.  The university I've chosen, which you've probably heard of, has a class about reverse engineering and vulnerabilities.  I indicated that this was a class I'd be interested in.  I was told that "we haven't found anyone to teach that class yet".  Color me baffled; I guess it's the thought that counts.

Academia must do better.  Governments must do better.  Vendors must do better.  If we're really going to stand around and be terrified silly by cyberwarfare, APT, SCADA and cyberterrorism, we all have to do better.  Because the other side doesn't have CISSPs, change controls, people who watch green dots turn to red dots or audits.  And they are kicking our ass.

Friday, May 6, 2011

The Economics of Security Failure

Rachel Maddow, who is on my list of really smart commentators who know how to respectfully disagree with people even when no one would blame her if she lost her mind, at one point had an excellent take on how America treats corporations.  She pointed out that, legally speaking, corporations are treated as people.  The Supreme Court, for example, has ruled that they enjoy the same protections under the Constitution that the rest of us enjoy.

But she goes on to say that this does not mean that we can assume that corporations will behave as we would expect humans to behave.  Ultimately they are driven by profit, and therefore their decision making is largely driven by what makes the most economic sense for them.  This isn't wrong, this isn't bad, but we should be aware of it when we're discussing the role of corporations.  Think of it as a corollary of the famous W. Edwards Deming quote:  "People with sharp enough targets will probably meet them, even if they have to destroy the company to do so."  If this is even marginally true, then what happens if you can meet your target, not destroy the company, but put millions of people at risk?

Within the context of security, which has always been a cost center, this question has largely been answered.  The latest example is Sony's PSN woes, which include a loss of personal information and potentially credit card information of 100 million customers.  The testimony of Dr. Gene Spafford before the US House of Representatives' Subcommittee on Commerce, Manufacturing and Trade indicates that Sony knowingly ran unpatched Apache web servers.  He also says that they had no firewalls, which is bad, but if the vulnerability was in the Apache web service, a firewall is unlikely to have helped.

So that's point one: they made a bad decision.  Sony responded to the attack by sending emails to its customers notifying them that their personal information had been compromised.  They have also incurred the cost of hiring an incident response team, having to engage lawyers in butt covering (see email) and the shame of publicly apologizing and responding to congressional queries.

There is a cost to Sony.  But what we have here is a threat to 100 million customers and what, according to one expert, was a willful disregard for a known security issue.  Yet there appears to be no criminal statute to cover this, although there are already lawsuits seeking class action status.  There may be future costs that Sony will have to pay.

But the main question is this:  Does the cost, across the span of time that PSN has and will exist, of aggressively securing the PSN network against all known threats exceed the probable costs of all security incidents that will occur?  The answer, for Sony, is probably yes.  It will cost more to secure the network than to simply shoulder the failures when they occur.  For a corporate entity, the decision is fairly easy.
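The question above is just an expected-value comparison, and it can be sketched in a few lines.  Every figure here (breach probability, breach cost, annual security spend, service lifetime) is a hypothetical placeholder I made up for illustration, not anything Sony reported:

```python
# Back-of-the-envelope version of the question posed above: does the cost
# of aggressively securing the network exceed the expected cost of the
# incidents that will occur? All numbers are hypothetical placeholders.

def expected_incident_cost(annual_breach_probability: float,
                           cost_per_breach: float,
                           years: int) -> float:
    """Expected total breach cost over the lifetime of the service."""
    return annual_breach_probability * cost_per_breach * years

# Hypothetical inputs: a 10% chance per year of a breach costing $50M,
# over a 10-year service lifetime, versus $20M per year of security spend.
incident_cost = expected_incident_cost(0.10, 50_000_000, 10)
security_cost = 20_000_000 * 10

# Under these made-up numbers, shouldering the failures is cheaper --
# the "fairly easy" corporate decision described above.
assert incident_cost < security_cost
```

A real model would discount future costs and fold in reputational damage and legal exposure, but the point stands: whenever the expected incident cost comes in below the security spend, the profit-driven choice follows.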

Until organizations are aggressively penalized for failing to protect their customers, very little will change.  Too much data is being stored, that data is being kept for too long and companies are not being held truly accountable for their security stance.  I hope that 100 million people affected is enough to motivate governments, but I'm not holding my breath.

UPDATE:  Sony offers $1 million in identity theft protection

Thursday, February 4, 2010

The Sword, The Shield and You

So Ellen Nakashima of The Washington Post reported today that Google and the NSA are partnering to ward off cyber attacks. You can read the article here.

I had missed this story until Richard Bejtlich talked about it on his excellent TaoSecurity blog. I like talking to Richard; he has a view into problems through his real world job that I find valuable, and he has contributed much to the security industry, so I tend to keep up with what he is thinking about. I also love when I get to disagree with smart people, because I like to fight outside of my weight class. In that vein, these statements from Richard stood out:

"I expect to see a lot of protest from people who have knee-jerk reactions to anything associated with NSA. However, the article notes that NSA is trying to help defend Google against advanced persistent threat, which benefits Google's users."

and

"NSA can change this perception it will help them better defend American national interests."
He's dead right on both counts. There will be a ton of negative reaction on this announcement and if the NSA could change their rep life would be easier for them. Not that that would be a good thing. The most notable phrase from the article was this:

"But sources with knowledge of the arrangement, speaking on the condition of anonymity, said the alliance is being designed to allow the two organizations to share critical information without violating Google's policies or laws that protect the privacy of Americans' online communications."

What this seems to mean is that Google would share information regarding the attacks it has encountered and the NSA would provide remediation advice, information on trends in other APT-style attacks and recommendations on future defensive postures. There is nothing more powerful than data, both on offense and defense, so clearly both entities would benefit.

The nervousness that many Americans would feel comes from this: given the extreme secrecy under which the NSA operates, and given that Google is, most likely, the largest store of information on Internet traffic patterns on the planet, there is simply too much risk in providing a venue for increased cooperation between them. Outside of the United States, I'd be even more concerned. The source only said there would be protections for "Americans' online communications". This leaves the rest of the world unprotected and implies that an infrastructure either is or will be implemented where tap and trace capability would be ported to the Google database. (I would be seventeen flavors of shocked if that wasn't already in place.)

But there is a problem with information in the public market. There is very little in the way of traditional market forces that would move organizations to share data on the attacks they were experiencing. I talked a little about this when Richard invited me to the SANS IDS What Works conference (do not miss that next year). There is a vast exchange of information and capability between attackers, yet there is too little in the way of active cooperation between organizations and companies that are likely to, or have been, the target of high-capability attackers. Without sharing, both information and capability, organizations stand alone against the threat. That leads to, as Harlan Carvey put it in his blog, "From the perspective of a historical military analogy, this appears to be akin to special operations forces attacking villages defended by farmers and shopkeepers."

So how do we do this? Now that we have realized that the threat that has been described by the research community for many years now is finally (well, we've finally noticed) at our doorstep, how do we harness the expertise of those who have been engaged in cyber warfare for years while still being comfortable that we aren't suffering from another dragnet surveillance program?

The NSA is one of the most powerful tools this country has to defend itself. Every day thousands of dedicated Americans walk across the parking lot on Fort Meade and enter a world of threats we don't see. Every day they bring us "silent victories" that we'll never hear of. There is heroism, sacrifice and dedication both in Maryland and around the world. Very few of us have any idea of how much we need them.

But today is a different world from when the information assurance role was given to the NSA (1981ish). In 1981, very few people understood what could be done, and most of that data was generated by the NSA as part of their offensive capability. It only made sense to have those few who were actually versed in the threat and the capability provide guidance and assistance to organizations critical to America. But today, there is simply too much of a threat in combining access to the information store that Google has with the secrecy and power of the NSA.

Remember, Google is run by the man who said “If you have something you don’t want anyone to know, maybe you shouldn’t be doing it”. So we have that attitude, an agency tasked with one of the most complicated, difficult missions on the planet and a store of information of incalculable value. Why would we worry?

So let’s split that defensive capability from the NSA. The Department of Homeland Security (yeah, I know) has the National Cyber Security Division of the Office of Cyber Security & Communications. This is the organization that is tasked with defending this country. This would provide at least some distance between the sword and the shield.

TL;DR Version:
  1. We all need to talk more.
  2. NSA is awesome but scary.
  3. Google is awesome but scary.
  4. Let’s not do this.