Tuesday, May 31, 2011

Call to Arms

Last weekend, at the ripe old age of 37, I finally graduated from college.  This reasonably leads to two questions:
  1. Why did it take me 19 years to finish a 4 year school?
  2. Why did I bother after all this time?
Let's ignore the first, since it properly implies some emotional or mental failure on my part.  But, to address the second, the reason, and the only reason, that I finally finished school is that they wouldn't let me into a master's program without finishing my degree.

Why a master's?  I wanted to spend some time really digging into the technical side of information security.  I wanted to sit and think about the problems I feel haven't been adequately addressed, and I wanted to work through the technical areas I haven't had time to explore in my professional life.  I didn't want to waste time on risk management, policy, project management, psychology or ethics.  I wanted to go hardcore.  So I searched for programs that were highly technical in nature.  And I searched.  And I searched.  And I searched.  And I found...two.

Now, I'm sure I missed some somewhere.  But if I wanted a master's in "Cybersecurity Policy" or, god forbid, "Homeland Security with an information assurance focus," I'd have all the choices in the world.  If I want to manage the paper shuffle that drives an organization's information security, I'm set (available with a dual MBA option!!).  So, what is wrong with this?  What is wrong with this is that my wife is mad at me.

She is a manager inside a government agency, and she said something to the effect that in order to get promoted at the high end of the scale you had to make an impact at the management level.  This triggered some agitation and a loss of self-preservation on my part.  I said, "You know what is easy to find?  Someone who wants to manage.  Do you know what is hard to find?  Tech people who really know what the hell they are doing."  So...couch time for me and a good shunning for my father, who dared to nod wisely at my faux pas.

But this is the core of the problem:  We're up to our eyeballs in risk analysis, risk-informed policy, audits and people who want to manage because that is where the money is (or, worse, good tech people who now manage because that is where the money is).  There is, on a nationwide scale, a famine of hardcore technical security specialists who really know what the threat landscape looks like, what their tools can do and how best to react to incidents.  Even people who really want to go deep and be able to do more than apply patches are, to a large degree, left to their own devices.  Far too few have taken on the challenge of going it alone.

Even well-known educational institutions are having problems.  The university I've chosen, which you've probably heard of, lists a class on reverse engineering and vulnerabilities.  I indicated that this was a class I'd be interested in.  I was told that "we haven't found anyone to teach that class yet."  Color me baffled; I guess it's the thought that counts.

Academia must do better.  Governments must do better.  Vendors must do better.  If we're really going to stand around and be terrified silly by cyberwarfare, APT, SCADA and cyberterrorism, we all have to do better.  Because the other side doesn't have CISSPs, change controls, people who watch green dots turn to red dots or audits.  And they are kicking our ass.

Friday, May 6, 2011

The Economics of Security Failure

Rachel Maddow, who is on my list of really smart commentators who know how to disagree respectfully even when no one would blame them for losing their minds, at one point had an excellent take on how America treats corporations.  She pointed out that, legally speaking, corporations are treated as people.  The Supreme Court, for example, has ruled that they enjoy many of the same protections under the Constitution that the rest of us enjoy.

But she goes on to say that this does not mean we can assume corporations will behave as we would expect humans to behave.  Ultimately they exist to make a profit, and therefore their decision making is largely driven by what makes the most economic sense for them.  This isn't wrong, this isn't bad, but we should be aware of it when we're discussing the role of corporations.  Think of it as a corollary to the famous W. Edwards Deming quote:  "People with sharp enough targets will probably meet them, even if they have to destroy the company to do so."  If this is even marginally true, then what happens if you can meet your target, not destroy the company, but put millions of people at risk?

Within the context of security, which has always been a cost center, this question has largely been answered.  The latest example is Sony's PSN woes, which include the loss of personal information, and potentially credit card information, of 100 million customers.  The testimony of Dr. Gene Spafford before the US House of Representatives' Subcommittee on Commerce, Manufacturing and Trade indicates that Sony knowingly ran unpatched Apache web servers.  He also says that they had no firewalls, which is bad, but if the vulnerability was in the Apache web service itself, a firewall is unlikely to have helped.

So that's point one: they made a bad decision.  Sony responded to the attack by sending emails to its customers notifying them that their personal information had been compromised.  They have also incurred the cost of hiring an incident response team, the need to engage lawyers in butt covering (see the email) and the shame of publicly apologizing and responding to congressional queries.

There is a cost to Sony.  But what we have here is a threat to 100 million customers and what, according to one expert, was a willful disregard for a known security issue.  Yet there appears to be no criminal statute to cover this, although there are already lawsuits seeking class-action status.  There may be future costs that Sony will have to pay.

But the main question is this:  Does the cost, across the span of time that PSN has existed and will exist, of aggressively securing the PSN network against all known threats exceed the probable cost of all the security incidents that will occur?  The answer, for Sony, is probably yes.  It will cost more to secure the network than to simply shoulder the failures when they occur.  For a corporate entity, the decision is fairly easy.
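
To make that arithmetic concrete, here is a rough sketch in Python of the kind of back-of-envelope comparison a corporate risk owner might run.  Every number in it is invented purely for illustration; none of them come from Sony or from any real breach.

    # Back-of-envelope comparison of "pay to secure" versus "pay for failures".
    # All figures below are invented for illustration only.

    def expected_annual_loss(incident_cost, incidents_per_year):
        """Expected yearly breach loss: cost of one incident times how often it happens."""
        return incident_cost * incidents_per_year

    # Hypothetical inputs
    annual_security_spend = 50_000_000   # patching, staff, monitoring, audits
    breach_cost = 150_000_000            # notification, lawyers, credit monitoring, lost goodwill
    breaches_per_year = 0.1              # assume one major incident per decade

    breach_loss = expected_annual_loss(breach_cost, breaches_per_year)

    print(f"Annual cost of securing the network: ${annual_security_spend:,}")
    print(f"Expected annual loss from breaches:  ${breach_loss:,.0f}")

    if annual_security_spend > breach_loss:
        print("Cheaper to shoulder the failures when they occur.")
    else:
        print("Cheaper to secure the network.")

With assumptions like these, shouldering the failures wins by a wide margin, and notice that nothing in the model charges the company for the risk pushed onto its customers.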

Until organizations are aggressively penalized for failing to protect their customers, very little will change.  Too much data is being stored, that data is being kept for too long and companies are not being held truly accountable for their security stance.  I hope that 100 million people affected is enough to motivate governments, but I'm not holding my breath.

UPDATE:  Sony offers $1 million in identity theft protection