Monday, August 13, 2007

Why Making Software Companies Liable Will Not Improve Security

(((This was originally posted to my old blog on January 28th 2007. However, my host Blogthing went down almost immediately thereafter, so as far as I know almost nobody saw it. I'm now reposting it in response to the recent House of Lords report on e-crime. Bruce Schneier has commented approvingly on the report, including its recommendations for liability for security defects in software. So I think that now is a good time to repost this, including to Reddit.)))

----------------------------------

Bruce Schneier has written that the way to improve the current lamentable state of computer security is to impose mandatory liability on vendors for security breaches. I disagree: I think this would have little positive impact on security, but plenty of negative impact on the software industry generally, including higher prices, increased barriers to entry, and reduced competition.

This is a bit worrying: Schneier is a remarkably clever guy who understands software, security and (at least a bit of) economics. His understanding may well exceed mine in all three areas. I’m used to nodding in agreement whenever I read his words, so to find myself shaking my head was a strange experience. Hence this blog post: whether I turn out to be right or wrong, I will at least have scratched the itch.

So, to the argument:

On the face of it, Schneier’s argument is impeccable economics. Security is an “externality”, meaning that all of us pay the price for bad security, but only the software vendor pays the price for good security. In theory we should demand secure software and pay higher prices to get it, but in practice most people cannot accurately evaluate the security of a software system or make an intelligent trade-off about it. So system vendors (including, but not limited to, Microsoft) find it more cost-effective to trumpet their wonderful security systems while actually doing as little as possible. Security is more of a PR issue than a technical issue.

So the Economist’s Solution is to push the costs of bad security back onto the vendor, where they rightfully belong: you and I should be able to sue someone who sells us software with security defects. That way, if we suffer from virus infections, spambots or phishing because some software somewhere was defective, we can sue the manufacturer, just as I could sue a car manufacturer if I am hurt in a crash caused by defective brakes.

So far, so fine. But now imagine you are a small software company, such as the one run by Joel Spolsky. You sell a product, and you are making a reasonable living. Then one day a process server hands you a writ alleging that a cracker or crackers unknown found a defect in your software and used it to cause a series of security breaches for many of your customers, followed by downtime, loss of business, theft from bank accounts, and a total of a million dollars in damages. It could easily put you out of business. Even if the claim is weak and the damages inflated, you could still face a big legal bill.

Obviously this is not useful: the point is to encourage the vendors to do better, and while occasionally hanging the worst offender may encourage the others, doing so by lottery won’t.

Part of the problem is that the economic logic calls for unlimited liability. So it doesn’t matter whether you sold the software for $5 or $5,000: you are still on the hook for all damages due to security defects. Of course the law could move to a limited liability model, capping it at, say, the price of the software, but that is still too big for most companies to pay out. Even if it were 10% of the price of the software, Microsoft is probably the only company with a cash pile big enough to survive an incident that hit 50% of its users. But 10% of the price of a piece of software is a very tiny fraction of the real cost of a security incident; it looks a lot more like a fine than real liability.

So if you are a software vendor then how do you protect yourself from such an incident? Of course you can tighten up your act, which is the whole point. But no amount of checking and careful architecture is going to protect you from the occasional defect that blows everything wide open.

You could buy insurance. Lots of professions have to carry liability insurance: it’s just a cost of doing business. Insurers will want to see you take proper steps to avoid claims, but will then cover you for the ones that do happen. Or, more or less equivalently, there could be a “safe harbour” clause in the liability law. If you can show that you have taken all proper steps to ensure the security of your system, then it’s just bad luck for the customer and you are not liable.

The trouble with both of these solutions is that we have no way of deciding what the “proper steps” are, whether to maintain your insurance cover or to stay in the safe harbour. There are lots of practices that are generally thought to improve security, but their effectiveness depends more on having motivated people than on anything else. From the developer’s point of view, the need to develop secure software is replaced by the need to reach the safe harbour: if the approved practices say you do X, then you do it and make sure a written record exists to prove that you did it. Whether doing X actually improves the security of your product is irrelevant.

I’ve seen this effect personally. For a while I worked in an industry where software defects *do* give rise to unlimited liability, and where the government inspectors check that you are following the approved process. The industry was medical devices, and the government department was the FDA. The entire focus was on the paper trail, and I mean paper: signed and dated pieces of paper were all that counted unless you could prove that your configuration management (CM) system was secure according to yet another complicated and onerous set of rules (which we couldn’t). Worse yet, the inspectors wanted to see that you worked from the same records they were auditing, so you couldn’t even keep the training records on a database to find out who still needed what training: it was the signature sheet or nothing. In theory we weren’t even allowed to do software development on a computer, although in practice the inspectors merely noted that we were not in compliance on that point.

The impact of these rules on everyday work was huge and often counterproductive. For instance, it might have been a good idea to run lint or a similar tool on our C code. But this could never become part of the official process, because if it were, the inspectors would ask to see the output (i.e. the dated, signed printout from a particular run), annotated with a written review of each warning showing either how it had been resolved or why it could be ignored. Even if this could have been done on the computer, the overhead would have been huge. So it was cheaper not to put lint in the official process, and the resulting loss in real quality didn’t cost us anything.

(Actually, individual programmers did compile with gcc -Wall at least some of the time, which is the modern equivalent. But because this wasn’t part of the official process I don’t know how many did so, and there was certainly no independent review of their decisions to fix or ignore warnings.)
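For anyone who has never run these tools, here is a minimal invented sketch of the sort of thing gcc -Wall flags; the file and function names are mine, not from the real product. Under the official process, each of these warnings would have needed its own signed, dated record of review:

    /* dose_log.c -- invented example, not from any real product.
       Compile with:  gcc -Wall -c dose_log.c                      */
    #include <stdio.h>

    void log_dose(const char *patient, int dose_mg)
    {
        int retries;   /* -Wall warns: unused variable 'retries' */

        /* -Wall (via -Wformat) warns: "%s" expects a string but
           dose_mg is an int -- a real defect, not just noise. */
        printf("patient %s given %s mg\n", patient, dose_mg);
    }

Two lines of compiler output; under the paper-trail regime, two more signed and dated review records.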

And despite our best efforts it was simply impossible to comply with 100% of the rules 100% of the time. The FDA knows this of course, so its inspectors just hang someone from time to time to encourage the others. Most of the QA people in the industry are ex-FDA people, so they know how the system works.

(Side note: in 1997 the FDA justified their regulation of medical device design on the grounds that over 50% of device recalls were due to design defects. I never saw any figures for the design defect recall rate *after* they imposed these regulations.)

In short, I believe that any attempt to impose quality on software by inspection and rule books is doomed to failure. This approach works in manufacturing and civil engineering because there a good, safe product *is* the result of following the rules. But software engineering is nowhere near that mature, and may never be, because software is always invented rather than manufactured: as soon as we reduce the production of some category of software to a set of rules that guarantees a good result, we automate it and those rules become redundant.

So, back to security. Much as I dislike the current state of computer security, I don’t see liability or regulation as answers. I’ve seen regulation, and I don’t think liability would look any different because it always comes down to somebody outside the company trying to impose good practice by writing a rule book for software development that the company must follow (and prove it has followed) on pain of bankruptcy.

It might be argued that an insurance market would seek the least onerous and most effective rulebook. I disagree. All forms of insurance have the fundamental problems of “moral hazard” and “asymmetric information”, both of which come down to the fact that the development company knows a lot more about its own risk and behaviour than the insurer does. From the outside it is very difficult to tell exactly what a software company is doing and how well it is doing it. As long as improving security requires time and thought, I cannot see any effective way to tell whether the right amount of thought has been dedicated to the subject.

At the top I said that enforcing liability would increase costs and barriers to entry, and thereby reduce competition. Obviously risk brings cost, either in money set aside to cover it or in insurance premiums. The extra work required to preserve evidence of good practice will also increase costs. Finally, these costs will fall hardest on start-up companies:

  • They are always short of money anyway
  • Setting up the process requires up-front work, and having your process inspected by possible insurers to get a quote is going to be expensive too
  • Maintaining the evidentiary chains is particularly hard when you are trying to modify your product in response to early feedback
  • Insurers will prefer companies with an established track record of good practice and secure software, so start-ups will have to pay higher premiums

Put all of these together, and it really does add to the costs for small companies.

But suppose despite all the obstacles listed above we have our software, and it’s more secure. Not totally secure, of course, because there ain’t no such thing. Let’s say it’s a web framework, along the lines of Rails or Twisted. It gets incorporated into a banking system, along with a web server, a database, an open source token-based security system and a set of front-end web pages written by a bunch of contractors being bossed about by some project manager employed by the bank. And despite everyone’s best efforts, next month a few thousand people have their accounts cleaned out. It seems they fell for a targeted trojan that performed a man-in-the-middle attack and then used a defect in the web pages to inject SQL commands that, due to another defect in the database, were able to disable the bank’s monitoring of suspicious transactions. Lawyers representing the customers, the bank, all the contractors’ various liability insurance companies, the database vendor and the web framework vendor are suing or threatening to sue. Industry observers think it will be a good year or two before some kind of settlement is arrived at. And what about the open source developers of the security system? It seems that the ones in the UK will be fine, but the ones in the US have already received writs. The law says that developers are only liable if they received payment for the development, so they thought they were safe. But their project web server was partly funded by a donation from the bank, and according to the database vendor’s lawyers that is enough to establish liability.
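To make the web-page defect in that scenario concrete, here is a minimal sketch of the usual shape of a SQL injection bug and its fix. The table, column and function names are invented, and SQLite stands in only because its C API is small; the hypothetical bank's real stack is unknown:

    /* injection_demo.c -- illustrative only; link with -lsqlite3.
       The "accounts" table and both functions are invented for this sketch. */
    #include <stdio.h>
    #include <sqlite3.h>

    /* The defect: attacker-controlled text is pasted straight into the SQL,
       so input like  123'; UPDATE alerts SET enabled=0; --  becomes part of
       the statement itself. */
    int lookup_unsafe(sqlite3 *db, const char *account)
    {
        char sql[256];
        snprintf(sql, sizeof sql,
                 "SELECT balance FROM accounts WHERE id = '%s';", account);
        return sqlite3_exec(db, sql, NULL, NULL, NULL);
    }

    /* The fix: a parameterized query, where the input is bound as a value
       and can never be reinterpreted as SQL. */
    int lookup_safe(sqlite3 *db, const char *account)
    {
        sqlite3_stmt *stmt;
        int rc = sqlite3_prepare_v2(db,
                "SELECT balance FROM accounts WHERE id = ?;", -1, &stmt, NULL);
        if (rc != SQLITE_OK)
            return rc;
        sqlite3_bind_text(stmt, 1, account, -1, SQLITE_TRANSIENT);
        while (sqlite3_step(stmt) == SQLITE_ROW)
            ;  /* read columns from the row here */
        return sqlite3_finalize(stmt);
    }

The fix is a one-line change; working out which of half a dozen parties is liable for not having made it is the part that takes the lawyers a year or two.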

5 comments:

Anonymous said...

i'd say that if software developers had to take out insurance, perhaps different insurance companies would come up with different guidelines and best practices, and would get to the bottom of what worked well, because it was costing them more money.

having said that, i don't think mandatory liability makes any sort of sense. right now, users and software vendors are perfectly able to enter into consensual agreements specifying liability. the fact is, they don't, because it's really hard to warrant security of a whole system, and when you've got pieces interacting, now even the documentation has to be absolutely unambiguous, lest you create bugs in the interfaces.

really i think the solution is what's going on today. you use free (open source) software, which can be looked at by many eyeballs. this helps audit the software, but more importantly, means any changes happen under scrutiny. then you engineer the system so that even if one component fails, you don't have a breach.

Weavejester said...

Perhaps the regulations for building medical devices are more stringent than the regulation covering the gathering and storage of pharmaceutical data, but I have not seen anything near the difficulties you outline.

I help develop software to capture and record medical data in pharmaceutical trials, and have done for a couple of years now. Whilst this is a relatively short time, it is perhaps long enough to get some understanding of the regulatory processes governing software designed for medical use, at least as it pertains to software development.

A considerably greater proportion of effort goes into ensuring that data is stored correctly and safely than would with "normal" software. Detailed audit trails are a must, and the QC can become very involved and exact. However, it's been my experience that things aren't nearly as bad as you make out - complying with regulations and passing customer audits is not a feat of impossibility: we do it all the time.

Jeff said...

Software security will stay the same or get worse, and then be used as an excuse to bring in "Trusted Computing" which will be the end of freedom in computing.

Paul Johnson said...

weavejester: have you had real FDA inspections, or just customer audits?

Paul.

Grahame Grieve said...

Agree with this, though you could've leveraged the economics a bit harder. The fact is that in some domains, the users have a choice, and choose to buy insecure software at 1% of the price.

As for weavejester - sounds like standard healthcare development, whereas your experience sounds not so much like medical devices as like a device intended for implantation.