Thursday, November 29, 2007

Is Comcast's Packet Spoofing a Federal Crime?

The EFF has gathered evidence showing that Comcast is deliberately disrupting P2P traffic by spoofing RST packets to appear to come from the other end of the connection. See the EFF report for the technical details.

The US Criminal Code, Title 18, Part I, Chapter 47, Section 1030, covers "Fraud and Related Activity in Connection with Computers". I'm not a lawyer, but here is my understanding of the relevant bits of the statute (quotations from the statute are in quotation marks):

Jurisdiction: a "Protected Computer" is defined, amongst other things, as any computer "which is used in interstate or foreign commerce or communication, including a computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States". In other words, if your computer is on the Internet, even if it's outside the US, then it's a Protected Computer. That includes anything connected via Comcast, and anything that talks to any computer connected via Comcast.

Offence: there are two things to prove here:
  1. That someone employed by Comcast "knowingly causes the transmission of a program, information, code, or command, and as a result of such conduct, intentionally causes damage without authorization, to a protected computer". Damage is defined as "any impairment to the integrity or availability of data, a program, a system, or information". A spoofed RST packet instructs the receiving computer to drop a TCP connection, so it is a command that impairs the availability of data. I have no direct evidence that these packets were sent knowingly, but I find it difficult to imagine a scenario in which they were sent by accident.
  2. That this action caused "loss to 1 or more persons during any 1-year period [...] aggregating at least $5,000 in value". "Loss" is defined as "any reasonable cost to any victim, including the cost of responding to an offense, conducting a damage assessment, and restoring the data, program, system, or information to its condition prior to the offense, and any revenue lost, cost incurred, or other consequential damages incurred because of interruption of service". This is a bit tricky, but people do value their time. $10 per hour is a pretty low wage, and many professionals charge many times that. If failed P2P connections have cost 500 Comcast users 1 hour each in wasted time then this threshold has been reached. You might also be able to make a case purely on the cost of running a computer and keeping it connected via Comcast. The professional IT people who have taken the time to run tests with packet sniffers could certainly count their time at a professional rate as "responding to an offence" and "conducting a damage assessment". There is also some evidence that Comcast inadvertently disrupted other protocols, including Lotus Notes and Windows Remote Desktop. These are used commercially and their disruption would have real financial impact. So while a detailed accounting remains to be done, it certainly looks likely that the $5,000 threshold has been reached.
Penalty: "a fine under this title or imprisonment for not more than 5 years, or both, [if] the offense was committed for purposes of commercial advantage or private financial gain". Comcast's attempts to block P2P protocols are prompted by their desire to keep costs down while seeming to offer an unrestricted service. That counts as "commercial advantage".

So it certainly looks like a Section 1030 offense has been committed that could see someone put in the pen for five years. Any Comcast customers want to call the FBI?

Tuesday, November 27, 2007

Disruptive Innovation and the Walmart Linux PC

The Everex TC2502 is being sold by Walmart for $200 (no monitor included), assuming it hasn't sold out a second time. Part of the reason it's so cheap is that it runs Linux and OpenOffice.org instead of Windows and MS Office. The first run sold out within days, which is a strong clue that it was a lot more popular than expected. This doesn't mean popular in an absolute sense, just more popular than Walmart and Everex expected when they decided how many to stock. But it's the relative popularity that counts: Walmart and Everex produced this box because they calculated they would turn a profit on whatever number they expected to sell. So the fact that they sold out fast means two things:
  1. A bigger profit than expected, which is nice for Walmart and Everex.
  2. Cheap Linux-based PCs have a market niche big enough to make them worthwhile. Other manufacturers will have taken note. Expect imitators.
Anyone who has read The Innovator's Dilemma will recognise this pattern: a market incumbent listens to its best (i.e. richest and most profitable) customers, and in consequence makes its products bigger, better and progressively more expensive. The incumbent also finds it unprofitable to compete with narrow niche offerings at the bottom, because they are low quality and aimed at poor customers who can't afford the market leader. So it forgoes the bottom end of the market and concentrates on its nice profitable high-end models. However, over time the bottom-end offerings improve, and so become a cost-effective choice for more and more customers. Eventually this starts to make serious inroads into the sales of the incumbent. But by then it's too late. The incumbent must either cut prices to compete with the newcomer or continue to see its market share erode. Neither option will bring back the glory days, and historically many such companies have gone out of business surprisingly fast.

I'm quite sure that Bill Gates and Steve Ballmer have read The Innovator's Dilemma and seen this coming. There is a bit of MBA strategy theory that sums up their position nicely. Plot their product lines on a chart with two axes: market share and growth potential. The high-share, low-growth lines are "cash cows": they should be milked, and the profits put into high-growth lines. Eventually all cash cows turn into dogs (low share, low growth), and these should be killed off. You just have to hope that by then you have some new cash cows to replace them.

Microsoft has two cash-cows (Windows and Office), and Microsoft has indeed been milking them for all they are worth. The money has been ploughed into a bunch of ventures over the years, but none of them look like becoming future cash cows.

Now we may be entering the final act. People have talked about Linux as a disruptive technology for the last decade, and for server operating systems it definitely has been. It has effectively killed off proprietary Unix, and put a serious dent in Windows. Microsoft continues to fight a rearguard action in this market, but its reliance on big corporate customers is becoming more and more obvious as it tries to separate its premium products (which rich big companies will still reliably pay for) from its lower-end offerings. But on the desktop Windows and Office have continued to reign supreme.

Now for the first time in a decade a competing office suite is starting to nibble at the toes of the incumbent. It's not going to make a dent in Microsoft's quarterly numbers just yet, but the future can only go one way. Microsoft sells Windows and Office to PC builders for less than the retail prices, but it cannot let Windows and Office go onto a PC that sells for $200, because that would amount to giving them away: it would actually be cheaper to buy the PC with the software than to buy the software alone at retail prices. But as more people find that bottom-end hardware with Linux and OpenOffice.org makes a perfectly useful home PC at a fraction of the cost of the Microsoft alternative, so the market will grow. Microsoft may try to segment the market by offering a cut-down version of MS Office (maybe Word with a 20-page limit), but they would be competing with a fully featured product. No matter what they do to Windows and Office, Linux and OpenOffice.org are going to look like better value to anyone who is on a tight budget.

Initially it will just be cash-strapped consumers who buy these boxes (students in particular are going to love them). But this is a one-way street. Every consumer who buys one of these boxes is a consumer who is never going to buy a Microsoft box again, even when they get rich. Why pay more to learn a different set of software? And they'll tell their friends about how well it works too. The flow of money into multiple vendors' coffers will stimulate investment and competition. All the vendors will want a slicker, more fully featured Linux offering with a bigger repository of instantly downloadable free (in both senses) software. The resulting competition will be downright Darwinian, and the offerings are going to get very good very fast. Everybody is going to race up-market as fast as possible because that's where the real money is. At the moment that money is being taken by Microsoft, but not for long.

Add to this the famous network effects. Part of the reason MS Office dominates is that you need it to exchange documents with everyone else. But if that stops being true then another good reason to pay for MS Office disappears as well.

So I predict that Microsoft is going to be in serious trouble, probably within the next few years. Their existing cost base is tuned to making and selling ever bigger and better versions of their cash cows, and there is no way that they can cut this back to compete with Linux and OpenOffice.org on cost. But if they can't compete then their cash cows are going to turn into dogs before they can be replaced. So Microsoft will be left with two dogs, a bunch of ventures that require investment, and no cash flow. Sure, they have big cash reserves they can burn through, but that's not going to be enough even if their investors let them do it.

Friday, November 23, 2007

It's the disks that are the problem, not losing them

The UK news this week has been full of stories about the loss of two disks (presumably CDs or DVDs) containing all 25 million Child Benefit records. For those outside the UK, Child Benefit is several pounds a week paid to the mother of every child (i.e. everyone under 16) in the UK. In most cases it is paid directly into a nominated bank account.

This is one of the biggest data losses ever, if not the biggest. The government has been at pains to point out that the disks are probably just mislaid, and that they don't contain enough data for criminals to actually use. Meanwhile the Opposition has been alleging government incompetence and calling for them to "get a grip". People are being advised to keep an eye on their bank accounts and not to use their children's names as passwords.

All this misses the point. The problem was not that a couple of disks got lost; it is that a comparatively junior person could burn the entire database to a couple of disks, apparently on his or her own initiative, and without further controls. There was a time when copying 25 million records would have required substantial resources, such as an overnight run on the mainframe and a boxful of tapes. The sheer volume of data made it physically difficult to copy, or to lose. However, Moore's Law has turned that big, slow, expensive job into a few minutes with a CD burner. If those two disks were CDs then the whole lot would also fit on a £10 thumb drive, or even a cell phone. The story referenced above suggests that in the past the National Audit Office (the place where the disks in question never arrived) have made their own copies and sent them to outside auditors. None of the commentators seem to have realised that the most probable route for the data to get into the hands of criminals is not the loss of an authorised copy but the creation and distribution of unauthorised copies.

There are supposed to be procedures in place, but it's no surprise that they are not being followed; when was the last time you reached for the Company Procedure Manual to check the detailed procedure for some simple action? It's also too easy to blame the middle manager who decided that a written procedure was better than implementing a software access control system. Given a choice between implementing costly access controls and writing a procedure for making copies, which would you choose? Now try it again, but imagine that your annual evaluation is going to suffer if you "waste" money doing something that has no positive benefit on your departmental targets.

I would like to think that this incident will be a wake-up call for the civil service to revamp its data control procedures. However I doubt it. A scapegoat has already fallen on his sword (to mix the commonest metaphors). The government is keen to show it is doing something, but mostly to counter the opposition claims of incompetence. And the opposition is more interested in a ministerial scalp than in actually pushing for effective action. What is really needed is an audit of all databases containing UK citizen personal information, followed by a study into the necessary forms of access and the implementation of software-based authorization and logging mechanisms. But nobody in authority seems to be thinking along those lines.

The sad thing is that the Ministry of Defence has had hundreds of years' experience in dealing with sensitive and secret data, and they have become quite good at it. Perhaps they should give the rest of the government some lessons.

Thursday, September 20, 2007

The Rise of Modern Morality

Every so often I read a letter in a newspaper or some other forum about "the decline of modern morality", in which the writer laments the failing moral standards of private and public life. I'm old enough to have been reading these articles for two or three decades now, and I've seen some samples of the same genre from past times. Elvis Presley caused a moral panic in 1956 by gyrating his hips on stage, and this was cited by commentators at the time as corrupting the morals of young people. Much has been written about the sociology of moral panics and I don't propose to repeat it here. Instead I want to argue that, far from declining, modern morality is actually superior to moralities of the past (and I use the plural deliberately).

  • In 1970 in the UK it was not only legal but widely accepted practice to pay a woman less than a man for doing the same job. Women who wanted a life outside of home making and child rearing were seen as aberrant, and often regarded with scorn.
  • In 1960 in America many parts of the country segregated public facilities by race, with black people consistently and blatantly short-changed. This invidious system was widely supported by prominent politicians and churchmen.
  • Even into the 1970s in Australia aborigine children were forcibly removed from their parents and placed in institutions where they were denied proper education and were terribly vulnerable to physical and sexual abuse. Again, this programme was considered perfectly proper and moral by the standards of the time.
  • There was a standing assumption for many years that unmarried mothers should immediately give up their children for adoption. In Ireland they were also incarcerated in the "Magdalene Laundries". Other countries had similar systems. Strangely, the fathers were left to go free.
  • For most of the 20th century, when it became necessary to remove children from their parents because of neglect or abuse little thought was given to keeping siblings together: they would be split up to suit the convenience of potential adopters or fosterers.
  • Until recent years in most Western countries, homosexuals were persecuted and discriminated against, both by the law and by society at large.
  • Until the 90s in the UK drink-driving was considered a minor peccadillo. Drivers convicted of the offence were more likely to encounter sympathy at the unfair attitude of the legal system than censure at their reckless disregard for the safety of others.
Not all of these evils are completely gone, but in all cases there has been a substantial moral shift. I know that some readers will look at parts of this list, especially the tolerance of homosexuality, and regard it as evidence of moral decay rather than ascent. But this brings up another issue I have with the Cassandras of moral decay: their belief in moral absolutism, the idea that right and wrong are entirely self-evident, and that any deviation stems from a lack of morality rather than from a genuine disagreement over what the moral course is.

The problem with moral absolutism is that (in its commonest form) it asserts that the perfect moral code has already been revealed and the only possible improvement is in closer adherence to that code. However I describe myself as a Liberal Utilitarian (although I confess I haven't actually read any of Mill's work). Hence I see "the greatest good of the greatest number" as a basic moral principle, while at the same time acknowledging that there may be a lot of disagreement about exactly what that means and how to bring that about. Liberal utilitarianism recognizes no absolute moral imperative apart from the general principle that life, liberty and the pursuit of happiness are good things. However by retaining that anchor it avoids the charge often leveled by the Absolutists that those who question their moral code are "relativists" who believe that any moral code is as good as any other. Liberal utilitarianism examines the impact of moral rules on the real people affected by them. If different rules would have a better outcome then those rules are automatically better. I see the arguments (both past and ongoing) about freedom, equal opportunities and tolerance to be a part of this process, and I see a steady improvement through history. Those who object to the current crop of improvements in morality because they contradict their particular absolutist code would do well to read their history books.

Friday, September 7, 2007

Windows Disaster Recovery with Bacula

A few days ago my wife asked me to look at her computer. Programs were taking ages to respond even to a simple mouse click, or just crashed. And the system box kept making odd clicking noises.

I confirmed her diagnosis of a failing hard drive and shut the computer down. The following lunchtime I drove to the nearest PC World and bought a replacement hard drive. I normally buy mail-order, but we wanted the computer back up as soon as possible. That evening I removed the faulty drive and installed the new one, and I also stuck in some extra RAM while I had the case open. Then it was time to find out if my rather sketchy disaster recovery plan was going to work.

I run Fedora on my own box, and both computers are backed up using Bacula. This is designed for backing up lots of computers to tape drives, but it also supports backup to disk drives. I have two USB drives, keep one plugged in, and swap them every month or so. I use the default Bacula backup schedule of a weekly full dump on both machines and nightly incremental dumps. Most of the Bacula components only run on Unix, but there is also a Windows client, which is installed on my wife's computer (running Windows XP SP2). Bacula configuration is rather hairy, but if you have more than one computer it's well worth the effort.

The big headache for restoring Windows is the Registry. After previous bad experiences with this horrible blob of data I had made sure the registry was backed up by using regedit to dump the registry contents to a file before each Windows backup, so I hoped it would be OK. There are also problems with other files in C:\Windows that are constantly in use and therefore unwritable during system restore.

The Bacula manual pointed me at Bart PE Builder, which generates a Live CD version of Windows XP, and suggested that you might be able to run Bacula in that environment to get around these problems. Bacula ran fine, but I couldn't get the drivers for the network card to install, so that approach wouldn't work.

Instead I re-installed Windows XP SP1 from the original CD and ran Bacula there. The restore duly dumped copies of C: and D: in C:/temp/restore/c and /d. (The computer has two partitions because it used to have two drives. My wife got used to having C: and D: drives, and various programs had been configured to look for data on D:, so I kept the layout even when it went back to a single drive.) Then I booted under the Bart PE disk, deleted the existing contents of C: (except for /temp) and copied the restored contents into the root directories. Then for the big test: would it reboot?

Well, sort of. The login screen came up, but when I tried to log in the computer hung. I tried rebooting in safe mode, and then in "safe mode with console", which seemed to be the bare minimum. That at least got me a command line, so I ran regedit and imported the registry copy that had been created by the last backup. This warned me that some registry keys could not be modified and claimed that the changes had failed. I rebooted again, and this time found that I could log in, but pretty much all the settings had been forgotten. Office wanted to re-install itself, and every time I started Word it asked for my name and initials twice. This suggested that important parts of the registry were not only still not recovered, but also unwritable.

Windows also decided that it was running on new hardware, and badgered me to activate it. So I did. I then tried logging in as a different user and restoring the registry again in the hope that being a different user would lock different bits of registry. This merely overwrote the new activation data and Windows now point blank refused to let me log in at all until I activated it again. So I tried. This time it told me that I had exceeded my activation limit and would have to phone up for activation. So I did. Windows gave me a 36 digit number and a robot on the other end of the phone line told me to type in the number. Then a polite gentleman named "Fred" with an Indian accent asked me how many computers I had Windows installed on and why I needed to activate it. Then he gave me another 36 digit number to type into Windows to activate it. This worked. But when I logged in Windows wasn't behaving any better.

A bit of Googling reminded me of something I should have remembered much earlier: Windows occasionally checkpoints its critical state, including the Registry, and you can wind it back to a previous known good state using the System Restore function. So I located a checkpoint from before all the trouble started, restored Windows to that state and rebooted.

When I tried to log in I got immediate joy: the desktop background had been restored. This suggested that the registry was now intact. But this restore had also overwritten the registration data, and Windows once again demanded to be activated before I could log in. Back to the phone, this time to a polite woman named June with a much stronger Indian accent who asked me the same questions and gave me yet another 36 digit number to type in. This time everything worked. Disaster recovery was completed.

I'd like to thank the authors of Bacula for their excellent backup program. It saved both of us a lot of heartache. In the past I've found it too easy to neglect backups, and sometimes our computers have gone for months without being backed up. When I got it properly configured (not a trivial task) Bacula made backups automatically with minimal intervention by me (basically, swapping USB drives occasionally). That meant I had a good recent backup to work from.

I'd also like to thank Bart Lagerweij, author of Bart PE. I could probably have managed by booting Knoppix and using its NTFS driver capture facility, but having a native Windows environment made life much easier.

Saturday, September 1, 2007

Composability and Productivity

This was posted to my original blog on January 7th 2007. It was also discussed on Reddit, so please do not repost it.

----------------------------------------------------

My earlier post about increased productivity through functional programming stirred up a good deal of comment. A number of people replied that the libraries are more important than the language. More recently a similar point has been made by Karsten Wagner, who argues that code reuse is what makes languages productive.

I remember the early days of OO, when it was argued by many people (me amongst them) that OO languages would finally let us write reusable software. There was optimistic talk of “software factories”, and Brad Cox gave lots of talks about how software could now finally move from craft to engineering discipline, built on the back of libraries of reusable code. So the entire industry went for C++, but the easy creation of reusable software remained elusive. That is not to say you can’t ever produce reusable software in the language, but it is not significantly easier than in C.

Karsten accurately pins the reason why C++ failed to deliver: memory management. Back in those old days I was trying to explain to people why Eiffel would be so much more productive than C++, and garbage collection was a big part of it.

The reason why GC is so important lies in a more general principle called composability. Composability means that you can put two bits of code together and important correctness properties will be preserved automatically. This does not mean, of course, that the composition is automatically correct in a wider sense, but it does mean that you don’t introduce new bugs merely by sticking two things together.

Imagine two modules of code in a non-GC language like C or C++. Module X creates an object and hands a reference to that object over to Module Y. At some point in the future X will delete that object. However it is only safe to do so once Y has finished with it. So if X and Y were written independently then it is quite possible that Y will hang on to the reference longer than X expects. In short, manual memory management is not composable, because the composition of X and Y can introduce stale pointer bugs that were not present in either X or Y.

In a language with GC this is a non-problem: the collector will reap the object once it sees that both X and Y have finished with it. X can therefore forget about the object in its own time without having to know anything about Y. But programmers in non-GC languages have to resort to a number of workarounds. Either objects have to be copied unnecessarily (which leads to stale data bugs instead of stale pointer bugs), or else some kind of reference counting or similar scheme must be employed. Reference counting is of course merely an ad hoc, informally-specified, bug-ridden, slow implementation of GC, and therefore stands as a classic example of Greenspun’s Tenth Rule.

But programming languages contain other examples of non-composable constructs. The current biggest offender is shared memory concurrency using locking. If X takes locks Foo and Bar, and Y takes locks Bar and Foo (in those orders), then sooner or later they are going to deadlock with X holding Foo and Y holding Bar. Ironically Java has the biggest problems here.
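
The original post stops at the diagnosis, but Haskell's STM library makes a nice concrete contrast: transactions written independently can be glued together and the combination is still atomic. A minimal sketch (function names invented for illustration; assumes the stm package that ships with GHC):

import Control.Concurrent.STM

-- Move money between two transactional variables.
transfer :: TVar Int -> TVar Int -> Int -> STM ()
transfer from to amount = do
    f <- readTVar from
    t <- readTVar to
    writeTVar from (f - amount)
    writeTVar to   (t + amount)

-- Two transfers composed into one atomic action.  There is no lock
-- ordering to get wrong, so the composition cannot introduce a deadlock.
swapTen :: TVar Int -> TVar Int -> STM ()
swapTen a b = transfer a b 10 >> transfer b a 10

main :: IO ()
main = do
    a <- newTVarIO 100
    b <- newTVarIO 100
    atomically (swapTen a b)
    atomically (readTVar a) >>= print    -- prints 100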

More surprisingly, program state in general is a source of non-composability. Mutable state is actually another form of manual memory management: every time you overwrite a value you are making a decision that the old value is now garbage, regardless of what other part of the program might have been using it. So suppose that X stores some data in Y, and then Z also stores some other data in Y, overwriting what X did. If X assumes that its old data is still there then it is going to be in trouble. Either X needs to defensively check for new data, or Y needs to tell X about the change (the Observer pattern). Either way, the addition of Z to the system can introduce bugs that stem from the composition of modules rather than from the modules themselves.
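
As a concrete contrast, here is a small sketch (my own example, not from the original post) using Haskell's persistent Data.Map: an "update" produces a new version while the old one remains valid, so no module's view of the data can be destroyed behind its back.

import qualified Data.Map as Map

main :: IO ()
main = do
    let original = Map.fromList [("x", 1)]        -- the data X put into Y
        updated  = Map.insert "x" 99 original     -- Z "overwrites" the entry
    -- X's original view is untouched; Z simply holds a new version.
    print (Map.lookup "x" original)    -- Just 1
    print (Map.lookup "x" updated)     -- Just 99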

Another example is the Command pattern, which includes a method to “undo” a previous command. Why undo it? Because the global state has been changed. Get rid of the concept of a single unique state for the entire application and you get rid of the problem.
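
A minimal sketch of that idea in Haskell (invented types, purely to illustrate): with immutable values there is nothing to "undo"; you simply keep the previous versions around.

newtype Doc = Doc String deriving Show

-- Apply an edit by pushing a new version onto the history.
edit :: (String -> String) -> [Doc] -> [Doc]
edit f history@(Doc s : _) = Doc (f s) : history
edit _ []                  = []

-- "Undo" is just dropping the latest version.
undo :: [Doc] -> [Doc]
undo (_ : older@(_:_)) = older
undo history           = history

main :: IO ()
main = print (undo (edit (++ " world") [Doc "hello"]))   -- [Doc "hello"]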

On a more prosaic level, many library modules have some internal state, and so require an initialisation call to “set things up”. Who exactly is responsible for this initialisation? If you compose two modules together then they may both imagine themselves solely responsible for making this initialisation call, and hence it will be done twice.

It is a truism of programming that “integration” is the most difficult and risky (from a scheduling point of view) part of developing a large system. Integration basically means composing together all the units to try to get a working system. And the biggest integration problems are due to the non-composability of the units. A defect within a unit is fairly easy to identify, but composition bugs are down to the interaction of two separate units in some unforeseen way, which makes them correspondingly more difficult to pin down.

So what would a language look like if it got rid of all of these non-composable constructs? The answer, basically, is Haskell. The underlying goal of functional language designers is to make everything completely composable. Haskell is the language that has advanced furthest towards this goal.

Erlang is pretty good as well. It doesn’t eliminate state, but it does keep it confined to individual processes: there are no shared memory locks. Processes have to communicate according to shared protocols, but this is a manageable dependency on a common abstraction. Even within a process the shared state can be reduced by careful use of functional programming rather than imperative.

Monday, August 27, 2007

Tax as Percentage of GDP

In the 17th century the French finance minister Jean-Baptiste Colbert said "The art of taxation consists in so plucking the goose as to obtain the largest possible amount of feathers with the smallest possible amount of hissing". His observation is as true today as it was then.

Whenever the government in the UK decides to charge for something, especially if it was previously free, it is in turn charged with "stealth taxation". I don't know about other countries, but I imagine that the politics are similar. There is often a good reason why the government wants to levy a charge, which often has nothing to do with its overall level of income. An example is the recent proposal to charge by the kilo for domestic refuse collection: the "polluter pays" principle is generally recognised as sound and "free" rubbish collection is increasingly expensive. But not everyone agrees:

Mother-of-five Mandy Price, who has just begun to recycle but still produces an average of nine bin liners of rubbish a week, said the assembly could not justify introducing such a policy in the wake of council tax rises.

She said: "You pay your council tax for the local authority to come and collect your rubbish, so why should we pay more?"

In other words, the charge is perceived primarily as just another source of revenue rather than an attempt to shift the costs onto the people who actually use the service the most. The government response is always to say that this is going to be offset by lower taxes elsewhere (in this case, council tax), but this makes a very unconvincing soundbite.

Gross Domestic Product is a standard way of measuring the overall economic activity in a nation or economic area, and the usual way of measuring the tax burden on an economy is the ratio of taxation to GDP. For instance, Reform points out that the US takes 26.4% of GDP as tax, whereas the UK takes 35.8%. Of course, these figures are almost useless for international comparison because they don't say what the tax pays for. In the UK most health care is paid for by the government out of general taxation whereas in the US it is mostly paid for privately by employer-funded health schemes that get a tax break. The NHS is funded out of taxation, and so is charged to the government's account, but a scheme partly funded by a tax reduction (as in the US) is not. If the US were to eliminate the tax break and subsidise these schemes directly instead then the basic nature of the system would not change one whit, but the proportion of US GDP taken in tax would increase. The US spends around 16% of GDP on health care (compared to about 8% in the UK). From an employer's point of view taxation and healthcare are both just costs of doing business, and the overall burden of general taxation and healthcare in the US is actually higher than in the UK. As always you get what you pay for, but a worrying amount of political debate seems to revolve around which ledger the payment is recorded in.

However, one place where such figures would be useful (but never seem to be used) is in domestic political arguments. The Liberal Democrats used to have a policy of adding 1% to the rate of income tax in order to fund improved education. Meanwhile the Tories are struggling with a reputation for aggressive cost-cutting in public services in order to fund tax cuts, and the Labour government has been increasing taxes in order to spend more on public services (with a notable lack of effect, but that's another issue).

What I would like to see is for each party to declare a target level of taxation as a percentage of GDP. That would get their macro-economic policy on taxation out into the open, without confusing it with a lot of micro-economic questions about how it is collected. That's not to say that those micro-economic questions are unimportant, but they need to be separated from the macro-economic debate about the overall level of taxation.

Wednesday, August 22, 2007

Microsoft versus FOSS Configuration Management

This was originally posted to my old blog on December 3rd 2006. It was also discussed on Reddit at the time, so please do not repost it.

--------------------------------------

Joel Spolsky writes about the Vista shutdown menu and its excess of confusing options (what exactly is the difference between Hibernate and Sleep?). Moishe Lettvin, who happened to have worked on that menu, chimed in with an explanation of why it came out that way, which included a fascinating insight into how Microsoft handles configuration management.

For the uninitiated, configuration management is a lot more than just version control. It includes managing all the libraries and tools used in a build, and if multiple components are being incorporated into the final product then it also involves keeping track of those. The goal is to be able to go back to some build that happened last year, repeat the build, and come out with the same MD5 checksum at the end. Stop and think about what that involves for a minute. It's highly non-trivial. (And some compilers happen to make it impossible by using multiple threads, so that two consecutive builds generate the same code but with functions in a different order. Under some high-integrity quality regimes this actually matters.)

The Microsoft problem is that they have thousands of people working on Vista, and a simple repository with everyone checking stuff in and out simply won’t scale. So they have a tree of repositories about four deep. Each developer checks stuff in to their local repository, and at intervals groups of repositories are integrated into their local branch. Hence the changes propagate up the tree to the root repository, from where they then propagate back down the other branches.

The trouble is, propagation from one side of the tree to the other can take a month or three. So if developer A and developer B need to collaborate closely but are on distant branches of the build tree then their code takes ages to propagate between them.

Now consider the case of open source software. The Linux kernel and Windows are actually organised in very similar ways: Linus owns the master repository, people like Alan Cox own sub-repositories, and handle integration up to Linus. Sub-repository owners are responsible for QA on the stuff they merge in, just like in Windows.

So why is Windows Vista in trouble, but free / open source software is not? After all, GNU/Linux overall has tens of thousands of people developing for it, and thousands of packages of which the kernel is just one. Worse yet, these tens of thousands of people are not properly organised and have very little communication. Microsoft can at least order all its programmers to work in certain ways and conform to certain standards.

Actually this disconnection is a strength, not a weakness. Conway’s Law says the structure of any piece of software will duplicate the structure of the organisation that created it. So in Microsoft there are lots of programmers who all talk to one another, and this leads to software where all the bits are inter-dependent in arbitrary ways. Open source developers, on the other hand, are spread around and have very narrow interfaces between each other. This leads to software with narrow well defined interfaces and dependencies.

Dependencies are the crucial thing here: if I am writing a new application that uses, let’s say, the Foobar library, then I will want to depend on a stable version. If I write to the API in the daily snapshot then my code could suddenly break because someone submits a patch that changes something. So I write to the last stable release. If I really need some feature that is still being developed by the Foobar team then I can use it, but that is an exceptional case and I won’t be releasing my application until the Foobar feature stabilises.

Dependency management is probably the most important contribution of open source to software engineering. The requirement first became obvious under Windows, with “DLL hell”: different applications installed and required different dynamic libraries, and conflicts were inevitable. Then Red Hat users encountered a similar problem in “RPM hell”, in which they had to manually track the dependencies of any package they wanted and download and install them.

As far as I know Debian was the first distribution to really solve the dependency problem, with apt-get. These days Fedora has yum and pirut, which do essentially the same job. It's not a simple job either. A program may have a dependency not just on Foobar, but on Foobar version 1.3.*, or anything from 1.3 onwards but not 2.*. Meanwhile some other package may depend on Foobar 1.6 onwards, including 2.*, and yet a third package requires Foobar 2.1 onwards. It is the job of apt-get and its relatives to manage this horrendous complexity.
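
To give a flavour of what the resolver has to do, here is a toy sketch in Haskell (invented types and names; the real apt-get and yum solvers also have to search for a globally consistent set of versions across thousands of packages):

-- A version is its numeric components, e.g. [1,3,2] for "1.3.2".
type Version = [Int]

-- The kinds of constraint mentioned above.
data Constraint
    = Wildcard Version              -- e.g. 1.3.*
    | AtLeast Version               -- e.g. 1.3 onwards
    | AtLeastBelow Version Version  -- e.g. 1.3 onwards but not 2.*
    deriving Show

satisfies :: Version -> Constraint -> Bool
satisfies v (Wildcard p)         = take (length p) v == p
satisfies v (AtLeast lo)         = v >= lo
satisfies v (AtLeastBelow lo hi) = v >= lo && v < hi

-- Which of the available versions of Foobar would satisfy every package?
compatible :: [Constraint] -> [Version] -> [Version]
compatible cs available = [v | v <- available, all (satisfies v) cs]

main :: IO ()
main = do
    let candidates = [[1,2,9], [1,3,4], [1,6,0], [2,1,0], [2,4,1]]
    -- 1.3.*, 1.6 onwards and 2.1 onwards cannot agree on any version:
    print (compatible [Wildcard [1,3], AtLeast [1,6], AtLeast [2,1]] candidates)
    -- Relax the first constraint and two versions become acceptable:
    print (compatible [AtLeastBelow [1,3] [3], AtLeast [1,6], AtLeast [2,1]] candidates)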

(Side note: I remember when OO software was young in the nineties, people thought that the biggest challenge for reusable software was searching repositories. They were wrong: it is dependency management.)

Open source also has a clear boundary to every package and a very precise process for releasing new versions. So when the new version of Foobar comes out anyone interested finds out about it. Any changes to interfaces are particularly carefully controlled, so I can easily tell if the Foobar people have done anything to break my application. If they have I can carry on depending on the previous version until I can resolve the problem. Then before my application finds its way into, say, Fedora, it has to be integrated into an explicit dependency graph and checked for conflicts with anything else.

Microsoft doesn’t do this. Vista is effectively a big blob of code with lots of hidden dependencies and no effective management of what depends on what. No wonder they are in trouble.

Sunday, August 19, 2007

Anatomy of a new monad

The monad described here was originally written in response to a posting by Chad Scherrer on Haskell-Cafe:

I need to build a function
buildSample :: [A] -> State StdGen [(A,B,C)]

given lookup functions
f :: A -> [B]
g :: A -> [C]

The idea is to first draw randomly form the [A], then apply each
lookup function and draw randomly from the result of each.


I suggested Chad try using ListT for the non-determinism, but this didn't work, so I decided to try solving it myself. I discovered that actually ListT doesn't work here: you need a completely custom monad. So here it is. Hopefully this will help other people who find they need a custom monad. This isn't meant to be yet another ab-initio monad tutorial: try reading the Wikibook or All About Monads if you need one.

The basic idea of the MonteCarlo monad is that each action in the monad returns a list of possible results, just like the list monad. However the list monad then takes all of these possible results forwards into the next step, potentially leading to combinatorial explosion if you can't prune the tree somehow. It was this explosion that was giving Chad problems when he scaled up his original solution. So instead the MonteCarlo monad picks one of the results at random and goes forwards with that.
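
To see the scale of the problem, here is a tiny made-up illustration (my numbers, not Chad's data) of what the ordinary list monad does:

-- Ten possibilities at each of three steps: the list monad keeps all
-- 1000 combinations, where the MonteCarlo monad will follow just one path.
explosion :: Int
explosion = length combos
  where
    combos :: [(Int, Int, Int)]
    combos = do
        a <- [1..10]
        b <- [1..10]
        c <- [1..10]
        return (a, b, c)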

Picking a random element means we need to thread a random number generator StdGen as a form of monadic state. So the monad type looks like this:
newtype MonteCarlo a = MonteCarlo {runMC :: StdGen -> (StdGen, [a])}
This is based on the state monad, except that the type parameter "s" is replaced by StdGen. I could have used the state monad here, but it would have meant working its "bind" and "return" into MonteCarlo, which would have been more trouble than it was worth.

Picking a random item from a list is going to be necessary, and as seen below it is actually needed more than once. So:

-- Internal function to pick a random element from a list
pickOne :: [a] -> StdGen -> (StdGen, a)
pickOne xs g1 = let (n, g2) = randomR (0, length xs - 1) g1 in (g2, xs !! n)
Monad instance
Now for the Monad instance declaration. In this we have to declare the bind function (>>=) and the return function.
instance Monad MonteCarlo where
    MonteCarlo m >>= f = MonteCarlo $ \g1 ->
        let
            (g2, xs) = m g1
            (g3, x)  = pickOne xs g2
            (g4, f') = case xs of
                []   -> (g2, mzero)
                [x1] -> (g2, f x1)
                _    -> (g3, f x)
        in runMC f' g4

    return x = MonteCarlo $ \g -> (g, [x])
The return function shows the minimal structure for a Monte-Carlo action: wrapped up in the MonteCarlo type is a function that takes a StdGen state and returns the state (in this case unmodified) along with the potential results. If the action had used the generator then the result pair would have had the new generator state instead of the old one.

The bind (>>=) is a bit more complicated. The type for monadic bind is:
(>>=) :: m a -> (a -> m b) -> m b
In the MonteCarlo instance of Monad this becomes:
(>>=) :: MonteCarlo a -> (a -> MonteCarlo b) -> MonteCarlo b

The job of bind is to take its two arguments, both of which are functions under the covers, and compose them into a new function. Then it wraps that function up in the MonteCarlo type.

We unwrap the first argument using pattern matching to get the function inside "m". The second parameter "f" is a function that takes the result of the first and turns it into a new action.

The result of the bind operation has to be wrapped up in MonteCarlo. This is done by the line
MonteCarlo $ \g1 -> 
The "g1" lambda parameter is the input generator state for this action when it is run, and the rest of the definition is the way we compose up the functions. We need to call the first function "m" to get a result and a new generator state "g2", and then feed the result into the second argument "f".

The "let" expression threads the random generator manually through the first two random things we need to do:
  1. Get the possible results of the first argument as a list.
  2. Pick one of these at random.
However there is a twist because the result set could be empty. This is interpreted as failure, and the "MonadPlus" instance below will explain more about failing. In addition the result set could also be a single item, in which case there is no need to generate a random number to pick it. So the f-prime value is the result of applying "f" to whatever item was picked, or an empty list if the "m" action returned an empty list. We also avoid getting a new generator state if the random pick was not needed.

Finally we use "runMC" to strip the MonteCarlo type wrapper off the second result, because the whole thing is being wrapped by the MonteCarlo type constructor at the top level of bind.

And there you have it. In summary, when defining an instance of "bind" you have to construct a new function out of the two arguments by extracting the result of the first argument and passing it to the second. This then has to be wrapped up as whatever kind of function is sitting inside your monadic action type. The arguments to this hidden function are the monadic "state" and therefore have to be threaded through bind in whatever way suits the semantics of your monad.

MonadPlus instance
MonadPlus is used for monads that can fail or be added together in some way. If your underlying type is a Monoid then it's a good bet that you can come up with a MonadPlus instance from the "mempty" and "mappend" functions of the Monoid. In this case the underlying type is a list, so the "mempty" is an empty list and the "mappend" is concatenation.
instance MonadPlus MonteCarlo where
    mzero = MonteCarlo $ \g -> (g, [])
    mplus (MonteCarlo m1) (MonteCarlo m2) = MonteCarlo $ \g ->
        let
            (g1, xs1) = m1 g
            (g2, xs2) = m2 g1
        in (g2, xs1 ++ xs2)

Here the "mzero" returns an empty list. The bind operation interprets an empty list as failure, and so indicates this to its caller by returning an empty list. Thus a single failure aborts the entire computation.

"mplus" meanwhile threads the random number state "g" through the two arguments, and then returns the results concatenated. The bind operation will then pick one of these results, so "mplus" has become a kind of "or" operation. However note that the alternation is done over the results, not the original arguments. If you say "a `mplus` b" and "a" returns ten times as many results as "b" then the odds are 10:1 that you will be getting one of the results of "a".

One important combinator introduces a list of items to pick one from.
-- | Convert a list of items into a Monte-Carlo action.
returnList :: [a] -> MonteCarlo a
returnList xs = MonteCarlo $ \g -> (g, xs)
Finally we need to be able to run a MonteCarlo computation. You could do this using runMC, but that gives you the full list of results for whatever the last action was, which is not what you want. Instead you want to pick one of them at random. So there is a runMonteCarlo function that looks an awful lot like the bind operation, and for similar reasons. This returns a Maybe type with "Just" for success and "Nothing" for failure.
-- | Run a Monte-Carlo simulation to generate zero or one results.
runMonteCarlo :: MonteCarlo a -> StdGen -> Maybe a
runMonteCarlo (MonteCarlo m) g1 =
    let
        (g2, xs) = m g1
        (_, x)   = pickOne xs g2
    in case xs of
        []   -> Nothing
        [x1] -> Just x1
        _    -> Just x
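
As a sketch of how the pieces fit together, here is roughly how Chad's original problem could be written with this monad (the lookup functions are placeholder lambdas of my own; newStdGen comes from System.Random, and a modern GHC would also want Functor and Applicative instances for MonteCarlo):

-- Pick an A at random, then pick one of its Bs and one of its Cs.
buildOne :: [Int] -> (Int -> [Int]) -> (Int -> [Int]) -> MonteCarlo (Int, Int, Int)
buildOne as f g = do
    a <- returnList as
    b <- returnList (f a)
    c <- returnList (g a)
    return (a, b, c)

demo :: IO ()
demo = do
    gen <- newStdGen
    print (runMonteCarlo (buildOne [1..5] (\a -> [10*a, 10*a + 1]) (\a -> [100*a])) gen)
    -- e.g. Just (3,31,300); it would be Nothing if any lookup had returned []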
Exercises for the reader:
  1. Add a new combinator allowing access to the random number generator.
  2. Each computation returns only a single result, but Monte Carlo methods depend on statistical analysis of many results. How should this be done?
  3. Can we introduce a "try" combinator to limit the scope of failure?

New Software Technology: Blockage On Line

This was originally posted to my old blog on December 14th 2006. It was discussed on Reddit so please don't repost it there. Since it was posted even stronger evidence has emerged showing the productivity increase from functional languages.
-----------------------------------------------------

This is the promised posting about why new software technology finds it so difficult to gain acceptance even when major improvements are likely.

To give you some idea of the scale of the problem, in 1997 Ulf Wiger wrote a paper entitled Four-fold Increase in Productivity and Quality. It described a practical experience with a mature technology (the Erlang language) on a significant new system by a large company.

Now Ulf Wiger is a well known proponent of Erlang, so the uncharitable might suspect some degree of bias and selective reporting. But however skeptical one might be of Ulf Wiger’s claims it would be a big stretch to think that he had invented or imagined the whole thing. The most likely explanation is that the reported results are broadly accurate.

So how come we are not all now programming in Erlang? I believe the answer lies in the “hill climbing” approach that most companies take to optimising their behaviour.

If you are lost on a foggy mountain then one way to reach the top is to simply head up hill. You don’t need a map or compass to tell which way that is, and eventually you will reach a point where every way is downhill. That is the top. Businesses are very much in this situation. There are lots of things a business could do that might work to increase profits. Some are big steps, others are small. The likely result, especially of big steps, is shrouded in fog. So the best thing is to move up the hill of profitability in small steps. Eventually you get to the top, and then you can rest for a while.

The trouble with this algorithm is that you are likely to have climbed a foothill, and when the fog clears you see that the real mountain is somewhere else.

Here the analogy breaks down because unlike real mountains the business environment keeps changing. Mountains go up and down over millions of years. In business the equivalent happens much faster. In fact many businesses have to run as fast as they can just to keep up with the foothills.

So now what happens when someone claims to have discovered a huge mountain in the distance? Three questions will immediately occur to the managers:
  1. Is it real?
  2. Will we survive long enough in the Lowlands of Unprofitability to get there?
  3. Will it still be there when we arrive?
All three are extremely good questions, and I’ll analyse them in detail below. For brevity I’ll talk about new programming languages but the same arguments apply to many new software technologies, especially the ones that affect the way that you program.

Is it real?

Managers in the software business are bombarded by sales pitches for things that will make their software faster, cheaper and better. 90 out of 100 of these pitches are for pure snake oil. A further 9 are stuff that will work, but nowhere near as well as advertised. The last 1 will change the world, and possibly make you a fortune if you time it right. The trouble is, how do you find that diamond in all the dross? Each new sales pitch requires a lot of time and effort to evaluate, most of which will give no return on the investment. And in the meantime there are those foothills to keep up with. So managers learn to listen to the sales pitch, nod sagely, and then carry on as before.

I say “managers” because they are usually the ones who make the decisions, and are therefore the target of the sales pitches. Sometimes they can be evaded. The early days of Linux adoption were punctuated by anecdotes of IT managers declaring that Linux was verboten in their shop, only to be gently told that it was already running some piece of key infrastructure.

Will we survive long enough to get there?

At first sight a new programming language looks simple to deploy: just start using it. Unfortunately things are not that simple.

Any significant project is going to require a team of developers, and then on-going maintenance and development of new versions. This means putting a team of people together who all know the language, and then keeping them on the staff. Do you train them? If so how long is it going to take them to get productive? In the days of the great OO paradigm shift it was generally agreed to take months. On the other hand you could hire them, but how many people out there know the language? Probably not very many. Either way, if somebody leaves then replacing them will be problematic.

A software house that has been earning money for a while will have been doing so on the back of some body of software (the exception being pure “body shops” who just write code for customers). This software is the major strategic asset of the company, and in practice most of the development effort in the company is devoted to maintaining and extending existing packages. The only way that you can apply a new programming language to an existing software package is to throw away and rewrite the whole thing. At the very least this is a huge and risky enterprise: revenue from the old legacy will drop off fast if you stop developing it, and in the meantime you just have to hope that the new system gets delivered on time and on budget, because if it doesn’t you will go bust. Of course a rewrite of this sort will eventually be necessary, but the sad thing is that by then the company is not in good enough financial shape to take the project on.

Most software companies have diversified and do not depend on one monolithic software asset, so in theory you could do the rewrites one system at a time. This is still expensive and risky, but at least you don’t have to bet the company. But typically each major asset has a division looking after it, and from within the division the sums look just the same as for a smaller company with one big asset. So the only people who can make such a decision are the board of directors. I’ll come back to this point later.

The last option for a new programming language is a completely new product line. Here you are starting with a clean sheet. You still have training and recruitment issues, not to mention long term support, and you have to put together a whole new toolchain for the developers, but the proposition does at least look sensible.

Will it still be there when we arrive?

New technologies often don’t hit the big time. If the suppliers go out of business, or the open source community loses interest, then anyone who adopted the technology early is going to be left high and dry. A previous employer of mine opted for Transputers in a big digital signal processing project. The Transputer was technically ideal, but then INMOS went out of business.

Geoffrey Moore has described a “chasm” between the Innovator market (who will buy anything just because it is new) and the Early Adopters (who make a rational decision to invest in new things). I’m not convinced that there is really a chasm: people seem to have continuous variations rather than discrete types. But either way there is a real obstacle here. In effect everyone is waiting for everyone else to jump first.

So those were the rational reasons why companies tend to avoid new technology. Now for the, err, less rational reasons.

Most of these come down to the fact that companies are not like Hobbes's Leviathan. As described in The Tipping Point, once you get past about 150 people in an organisation the people in it cannot keep track of everyone else. Hence you find lots of people at all levels working hard to optimise their bit, but inadvertently messing up the stuff done by someone else. Bear with me while I take you on a short tour of the theory and practice of management motivation.

Companies try hard to reward people who do the Right Thing (at least, the Right Thing for the company). This generally means short-term evaluation of how they are doing at their main task. Sales people, for instance, get paid partly by commission, which is a very direct linkage of short-term performance to pay. Other people get annual bonuses if their bosses recommend them, and of course promotion happens from time to time. And it's backed up by social pressure as well: these people are being rewarded for doing the Right Things, and everyone else takes note.

All of this is absolutely vital for a company to survive: you have to keep your eye on the ball, nose to the grindstone and ear to the ground. However, as Clayton Christensen describes in The Innovator’s Dilemma, it also leads to a problem when a “disruptive technology” arrives.

An example of what goes wrong was Xerox PARC. As is well known, the researchers at PARC pretty much invented the modern Office software suite, along with graphical user interfaces, laser printers and ethernet. The usual myth has it that Xerox executives were too dumb to realise what they had, but the real story is more interesting. Xerox did actually go to market with a serious office machine, called Xerox Star. You or I could sit down at one of those things and feel right at home. But when it was launched in 1981 it only sold 25,000 units, which was far too few to make a profit.

The reason (I believe, although I haven’t seen this anywhere else) is that Xerox salesmen (and they were almost all men at that time) were experts at selling big photocopiers to big companies. That was the bread-and-butter of Xerox business, and the quarterly bonuses of those salesmen depended on doing that as much as possible. Anything else was a distraction. So when this funny computer thing appeared in their catalog they basically ignored it. If someone specifically asked for some I’m sure that any salesman would be happy to fill the order, but they weren’t going to waste valuable face time with corporate buyers trying to explain why a whizzy and very expensive piece of equipment was going to revolutionise everything. So Xerox concluded that there was no market for networked desktop computers and sold the whole concept off to Steve Jobs in exchange for some Apple stock.

Christensen has a number of other examples of this phenomenon, all of which are market based. This is probably because you can observe the market behaviour and success of a company, whereas just about everything else they do tends to be visible only on the inside, and often not even then. But the same logic applies.

Suppose you are a project manager, entrusted with Project X to develop some new software. You have had your plans and associated budget approved by the Committee That Spends Money (every company has one, but the name varies). And then some engineer walks into your office and starts talking about a programming language, ending with “… so if you used this on Project X you could do it for a quarter of the cost”.

Now, strange to relate, a project manager will not actually be rewarded for coming in 75% under budget. Instead he (even today it is usually “he”) will be told off for not submitting a better estimate. Senior managers do not like padded estimates because they prevent the money being invested more profitably elsewhere. Coming in a bit under your original estimate is OK: it shows you are a good manager. But coming in way under shows you are either bad at estimation or just plain dishonest (managers watch Star Trek too). Besides, you already have approval for your original plan, so why bother changing course now?

But you have also been around a bit longer than this engineer, and have seen some technology changes. So you ask some pertinent questions, like who else has used it, how long it will take the programmers to learn it, and where the support is going to come from. At the end of this you conclude that, even if this technology is as good as claimed, if you use it on Project X you stand a very good chance of blowing your entire budget just teaching your programmers to use it. This will not get you promoted, and might even get you fired for incompetence. So you thank the engineer for bringing this matter to your attention, promise to look into it carefully, and show him the door.

So now the engineer tries going up the ladder. Next stop is the Product Manager, who looks after the product line that Project X will fit into. He can see that there just might be a case for making the investment, but he has already committed to a programme of improvements and updates to the existing product line to keep it competitive. His annual bonus depends on delivering that plan, and this new language will obviously disrupt the important work he has been entrusted with. So he too thanks the engineer and points him out of the door.

Next stop is the Chief Technology Officer. He is vaguely aware of programming languages, but being a wise man he seeks advice from those who understand these issues (most geeks will find this surprising, but very few senior managers got there by being stupid). Meaning, of course, the project and product managers mentioned earlier, possibly with a trusted engineer or two as well.

These engineers know about programming. In fact they owe their position to knowing more about it than anyone else. This new language will make that valuable knowledge obsolete, so they are not well disposed to it. On top of that they find the technical arguments in favour of the new language highly unconvincing. Paul Graham has christened this phenomenon The Blub Paradox. If you haven’t already read his essay please do so: it explains this far better than I ever could.

In short, everyone in the company with any interest in the selection of a new programming language can see a lot of very good reasons why it would be a bad idea. The only people who disagree are the ones who have taken the trouble to learn a new language and understand its power. But they are generally in a minority of one.

And this is true in every company. Every company has a few eccentric engineers who try to explain why this or that new technology would be a great investment. Sometimes they are even right. But they are almost never taken seriously. And so great technologies that could actually save the world a great deal of money on software development (not to mention improve quality a lot as well) languish on the shelf.

Monday, August 13, 2007

Why Making Software Companies Liable Will Not Improve Security

(((This was originally posted to my old blog on January 28th 2007. However my host Blogthing went down almost immediately thereafter, so as far as I know almost nobody saw it. I'm now reposting it in response to the recent House of Lords report on e-crime. Bruce Schneier has commented approvingly on the report, including its recommendations for liability for security defects in software. So I think that now is a good time to repost this, including to Reddit.)))

----------------------------------

Bruce Schneier has written that the way to improve the current lamentable state of computer security is to impose mandatory liability on vendors for security breaches. I disagree: I think that this would have little positive impact on security, but a lot of negative impact on the software industry generally, including higher prices, increased barriers to entry, and reduced competition.

This is a bit worrying: Schneier is a remarkably clever guy who understands software, security and (at least a bit of) economics. His understanding may well exceed mine in all three areas. I’m used to nodding in agreement whenever I read his words, so to find myself shaking my head was a strange experience. Hence this blog post: whether I turn out to be right or wrong, I will at least have scratched the itch.

So, to the argument:

On the face of it, Schneier’s argument is impeccable economics. Security is an “externality”, meaning that all of us pay the price for bad security, but only the software vendor pays the price for good security. In theory we should demand secure software and pay higher prices to get it, but in practice most people cannot accurately evaluate the security of a software system or make an intelligent trade-off about it. So system vendors (including, but not limited to, Microsoft) find it is more cost effective to trumpet their wonderful security systems while actually doing as little as possible. Security is more of a PR issue than a technical issue.

So the Economist’s Solution is to push the costs of bad security back on to the vendor, where they rightfully belong. Therefore you and I should be able to sue someone who sells us software with security defects. That way if we suffer from virus infections, spambots or phishing because some software somewhere was defective, we should be able to sue the manufacturer, just as I could sue a car manufacturer if I am hurt in a crash due to defective brakes.

So far, so fine. But now imagine you are a small software company, such as the one run by Joel Spolsky. You sell a product, and you are making a reasonable living. Then one day a process server hands you a writ alleging that a cracker or crackers unknown found a defect in your software, and used it to cause a series of security breaches for many of your customers, followed by downtime, loss of business, theft from bank accounts, and a total of a million dollars of damages. It could easily put you out of business. Even if the claim is slim and the damages inflated, you could still have a big legal bill.

Obviously this is not useful: the point is to encourage the vendors to do better, and while hanging the worst one occasionally may encourage the others, doing so by a lottery won’t.

Part of the problem is that the economic logic calls for unlimited liability. So it doesn’t matter whether you sold the software for $5 or $5,000, you are still on the hook for all damages due to security defects. Of course the law could move to a limited liability model, capping it at, say, the price of the software, but that is still too big for most companies to pay out. Even if it was 10% of the price of the software, Microsoft is probably the only company with a big enough cash pile to survive an incident that hit 50% of its users. But 10% of the price of a piece of software is going to be only a very tiny fraction of the real cost of a security incident. It looks a lot more like a fine than real liability.

So if you are a software vendor then how do you protect yourself from such an incident? Of course you can tighten up your act, which is the whole point. But no amount of checking and careful architecture is going to protect you from the occasional defect that blows everything wide open.

You could buy insurance. Lots of professions have to carry liability insurance: it’s just a cost of doing business. Insurers will want to see you take proper steps to avoid claims, but will then cover you for the ones that do happen. Or, more or less equivalently, there could be a “safe harbour” clause in the liability law. If you can show that you have taken all proper steps to ensure the security of your system then it’s just bad luck for the customer and you are not liable.

The trouble with both of these solutions is that we do not have any way of deciding what the “proper steps” to either maintain your insurance cover or stay in the safe harbour are. There are lots of good practices which are generally thought to improve security, but they actually depend more on motivated people than anything else. From the developer’s point of view, the need to develop secure software is replaced by the need to reach the safe harbour. If the approved practices say you do X then you do it, and ensure that a written record exists to prove that you did it. Whether doing X actually improves the security of your product is irrelevant.

I’ve seen this effect personally. For a while I worked in an industry where software defects *do* give rise to unlimited liability, and where the government inspectors check that you are following the approved process. The industry was medical devices, and the government department was the FDA. The entire focus was on the paper trail, and I mean paper: signed and dated pieces of paper were all that counted unless you could prove that your CM system was secure according to yet another complicated and onerous set of rules (which we couldn’t). Worse yet, the inspectors wanted to see that you worked from the same records they were auditing, so you couldn’t even keep the training records on a database to find out who still needed what training: it was the signature sheet or nothing. In theory we weren’t even allowed to do software development on a computer, although in practice the inspectors merely noted that we were not in compliance on that point.

The impact of these rules on everyday work was huge and often counterproductive. For instance, it might have been a good idea to run lint or a similar tool on our C code. But this could never become part of the official process because if it was then the inspectors would ask to see the output (i.e. the dated, signed printout from a particular run), annotated with the written results of the review of each warning showing how it had either been resolved or why it could be ignored. Even if this could have been done on the computer, the overhead would have been huge. So it was cheaper not to put lint in the official process, and the resulting loss in real quality didn’t cost us anything.

(Actually individual programmers did compile with gcc -Wall at least some of the time, which is the modern equivalent. But because this wasn’t part of the official process I don’t know how many did so, and there was certainly no independent review of their decisions to fix or ignore warnings).

And despite our best efforts it was simply impossible to comply with 100% of the rules 100% of the time. The FDA knows this of course, so its inspectors just hang someone from time to time to encourage the others. Most of the QA people in the industry are ex-FDA people, so they know how the system works.

(Side note: in 1997 the FDA justified their regulation of medical device design on the grounds that over 50% of device recalls were due to design defects. I never saw any figures for the design defect recall rate *after* they imposed these regulations).

In short, I believe that any attempt to impose quality on software by inspection and rule books is doomed to failure. This approach works in manufacturing and civil engineering because there a good safe product *is* the result of following the rules. But software engineering is nowhere near that mature, and may never be because software is always invented rather than manufactured: as soon as we reduce the production of some category of software to a set of rules that guarantees a good result we automate it and those rules become redundant.

So, back to security. Much as I dislike the current state of computer security, I don’t see liability or regulation as answers. I’ve seen regulation, and I don’t think liability would look any different because it always comes down to somebody outside the company trying to impose good practice by writing a rule book for software development that the company must follow (and prove it has followed) on pain of bankruptcy.

It might be argued that an insurance market would seek the least onerous and most effective rulebook. I disagree. All forms of insurance have the fundamental problems of “moral hazard” and “asymmetrical information”, both of which come down to the fact that the development company knows a lot more about its risk than the insurer. From the outside it is very difficult to tell exactly what a software company is doing and how well it is doing it. As long as security improvement requires time and thought I cannot see any effective way to tell whether the right amount of thought has been dedicated to the subject.

At the top I said that enforcing liability would increase costs and barriers to entry, and thereby reduce competition.
Obviously risk brings cost, either in the money that has to be set aside to cover it or to pay insurance. The extra work required to preserve evidence of good practice will also increase costs. Finally, these costs will fall hardest on the start-up companies:

  • They are always short of money anyway
  • Setting up the process requires up-front work, and having your process inspected by possible insurers to get a quote is going to be expensive too
  • Maintaining the evidentiary chains is particularly hard when you are trying to modify your product in response to early feedback
  • Insurers will prefer companies with an established track record of good practice and secure software, so start-ups will have to pay higher prices

Put all of these together, and it really does add to the costs for small companies.

But suppose despite all the obstacles listed above we have our software, and it’s more secure. Not totally secure, of course, because there ain’t no such thing. Let’s say it’s a web framework, along the lines of Rails or Twisted. It gets incorporated into a banking system, along with a web server, a database, an open source token-based security system and a set of front-end web pages written by a bunch of contractors being bossed about by some project manager employed by the bank. And despite everyone’s best efforts, next month a few thousand people have their accounts cleaned out. It seems they fell for a targeted trojan that performed a man-in-the-middle attack and then used a defect in the web pages to inject SQL commands that, due to another defect in the database, were able to disable the bank’s monitoring of suspicious transactions. Lawyers representing the customers, the bank, all the contractors’ various liability insurance companies, the database vendor and the web framework vendor are suing or threatening to sue. Industry observers think it will be a good year or two before some kind of settlement is arrived at. And what about the open source developers of the security system? It seems that the ones in the UK will be fine, but the ones in the US have already received writs. The law says that developers are only liable if they received payment for the development, so they thought they were safe. But their project web server was partly funded by a donation from the bank, and according to the database vendor’s lawyers that is enough to establish liability.

Sunday, August 12, 2007

Responses to 'Silver Bullets Incoming!'

(((
These are the responses originally posted to "Silver Bullets Incoming". Many of these responses are worth reading in their own right, and I'd like to thank the respondents for taking the time for such thoughtful posts.

Please do not post this to reddit, as it has already been discussed there under the original URL.
)))
  1. Stephen Says:

    Paul:

    I enjoyed your first article quite a bit - it got me thinking about technical language issues again (always fun).

    I’d like to comment on your update to the original article. Specifically, I have some comments regarding C++

    C++ is not an “old” language: it incorporates many language features of more “modern” languages, including exceptions, automatic memory management (via garbage collection libraries and RAII techniques), and templates, a language feature that is only available in C++, and that provides support for generic programming and template metaprogramming, two relatively new programming paradigms. Yes, C++ has been around a while, but until I see programmers constantly exhausting the design and implementation possibilities of C++, I won’t call the language “old.”

    C++ was not designed to support just OO programming: From “Why C++ Isn’t Just An Object-Oriented Programming Language” (http://www.research.att.com/~bs/oopsla.pdf):

    “If I must apply a descriptive label, I use the phrase ‘multiparadigm language’ to describe C++.”

    Stroustrup identifies functional, object-oriented, and generic programming as the three paradigms supported by C++, and I would also include metaprogramming (via C++ templates or Lisp macros) as another paradigm, though it is not often used by most developers.

    Of course, we should also keep in mind Stroustrup’s statement regarding language comparisons (“The Design and Evolution of C++”, Bjarne Stroustrup, 1994, p.5): “Language comparisons are rarely meaningful and even less often fair.”

    Take care, and have a good weekend!

    Stephen

  2. pongba Says:

    I found it so weird that, on the one hand you argue that Haskell is fast (to the extent that it might be even faster than some compiled languages such as C++), while on the other hand you said “where correctness matters more than execution speed its fine today”.
    Does that sound paradoxical?

  3. Another Paul Says:

    Paul:

    “I think that Dijkstra had it right: a line of code is a cost, not an asset. It costs money to write, and then it costs money to maintain. The more lines you have, the more overhead you have when you come to maintain or extend the application”

    By that measure, there’s no such thing as an asset. Think about that a moment - someone buys a general ledger or CAD/CAM system and modifies it as companies do. Either system reduces staff, provides more accurate information much more quickly, and renders the company more competitive. Take it away and what happens?

    It’s been my experience that while these systems require maintenance (and sometimes a lot) they usually result in a net reduction in staff and the cost of doing business. And some types of systems provide a clear competitive edge as well. I think that makes many systems just as much an asset as a house, factory building, or a lathe.

    Interesting article. Thanks.

    Another Paul

  4. BobP Says:

    >> An order of magnitude is a factor of 10, no less

    > Well, the Wikipedia entry does say about 10. All this stuff is so approximate that anything consistently in excess of 5 is close enough.

    0.5 orders of magnitude = power(10.0,0.5) = sqrt(10.0) = 3.1623 (approx)
    1.5 orders of magnitude = power(10.0,1.5) = sqrt(1000.0) = 31.623 (approx)

    If we are rounding off, a factor of 4 is about one order of magnitude; also, a factor of 30 is about one order of magnitude.

  5. Jeremy Bowers Says:

    You missed my point with Python, or at least failed to address it.

    My point wasn’t that Python is also good. My point was that you leapt from “10x improvement” to “it must be the chewy functional goodness!” But your logic falls down in the face of the fact that Python, Perl, Ruby, and a number of other non-functional languages also have a 10x improvement over C++; therefore it clearly is not a sound argument to just leap to the conclusion that “it must be the chewy functional goodness!” when there are clearly other factors in play.

    I’m not criticizing Haskell or functional programming, I’m criticizing your logic, and you’ve still got nothing to back it up.

    (This is par for the course for a claim of a silver bullet, though.)

  6. Sam Carter Says:

    “Libraries and languages are complicit: they affect each other in important ways. In the long run the language that makes libraries easier to write will accumulate more of them, and hence become more powerful.”

    This argument has a large flaw in it, namely that the current state of libraries doesn’t reflect this claim. The largest and most powerful collections of libraries seem to be .NET, CPAN, and the Java libs, certainly not Lisp libraries.

    But the advocates of Lisp would argue that it’s the most powerful language, and it’s clearly been around for a long time, yet the Lisp community has not accumulated the most powerful collection of libraries. So unless the next 40 years are going to be different from the previous 40 years, you can’t really assert that language power is going to automatically lead to a rich set of libraries.

    I stand by my original comment to the previous article that programming is more about APIs and libraries than about writing your own code, and that if you are focused on measuring code-writing performance, you are just measuring the wrong thing.

    I also disagree with the claim that this is unmeasurable because doing a real-world test is too expensive. As long as the project is solvable in a few programmer-weeks, you can test it out with different languages. I took a computer science class (Comp314 at Rice) where we were expected to write a web browser in 2 weeks. It wouldn’t be that hard to have a programming test which incorporated a database, a web or GUI front end, and some kind of client/server architecture, e.g. implementing a small version of Nagios, or an IM client, or some other toy application.

    I’m sorry but writing a command line application that parses a CSV file and does some fancy algorithm to simulate monkeys writing Shakespeare is about as relevant to modern software engineering as voodoo is to modern medicine.

  7. Paul Johnson Says:

    pongba:

    I’m arguing that Haskell programs are faster to *write*. Execution speed is a much more complicated issue. FP tends to lose in simple benchmarks, but big systems seem to do better in higher level languages because the higher abstraction allows more optimisation.

  8. Paul Johnson Says:

    Another Paul:

    The functionality that a package provides is an asset, but the production and maintenance of each line in that package is a cost. If you can provide the same asset with fewer lines of code then you have reduced your liabilities.

    Paul.

  9. Paul Johnson Says:

    Jeremy Bowers:

    Teasing apart what it is about Haskell and Erlang that gives them such a low line count is tricky, because it is more than the sum of its parts. One part of it is the high level data manipulation and garbage collection that Python shares with functional languages. Another part of it is the chewy functional goodness. Another part, for Haskell at least, is the purity. OTOH for Erlang it is the clean and simple semantics for concurrency.

    What I see in the results from the Prechelt paper is that Python was, on average, about 3 times better than C++ while the average Haskell program (from a sample of 2) was about 4 times better. Actually the longer Haskell program was mine, and I was really embarrassed when someone else showed me how much simpler it could have been.

    In terms of pure line count I have to concede that Python and Haskell don’t have a lot to choose between them. A 25% improvement isn’t that much. It’s a pity we can’t do a controlled test on a larger problem: I think that Haskell’s type system and monads are major contributors to code that is both reliable and short. Unfortunately I can’t prove it, any more than I could prove that garbage collection was a win back in the days when I was advocating Eiffel over C++.

    Paul.

  10. Paul Prescod Says:

    If you cannot “tease apart” what it is about Haskell and Erlang that makes them so productive then you cannot say that any one improvement is a silver bullet. It just feels truthy to you. Furthermore, if you are presented with counter-examples in the form of Python and Ruby then surely you must discard your thesis entirely. The best you can say is that there exist non-functional languages that are 10 times less productive than some functional languages for some projects.

    Duh.

  11. Paul Johnson Says:

    Sam Carter:

    On languages with expressive power gathering libraries; point mostly conceded, although Perl certainly is a very expressive language, so I don’t think it supports your point, and .NET has Microsoft paying its Mongolian hordes, so it’s not a fair comparison.

    There are two sorts of libraries: general purpose ones (e.g. data structures, string manipulation, file management) that get used in many applications, and vertical libraries (HTTP protocol, HTML parsing, SMTP protocol) that are only useful in specific applications. There is no hard dividing line of course, but the usefulness of a language for general purpose programming depends on the language and its general purpose libraries. The vertical libraries have a big impact for applications that use them, but not elsewhere. So I would generally judge a language along with the general purpose libraries that are shipped with it. The special purpose libraries are useful as well, but its a secondary consideration.

    Paul.

  12. Paul Johnson Says:

    Sam Carter (again):

    Sorry, just looked back at your post and realised I’d forgotten the second point.

    A worthwhile test is going to take about 10 versions to average out the impact of different developers. So that’s 2 weeks times 10 coders, which is 20 developer-weeks, or almost half a man-year. Say a coder is on $30K per year and total cost of employment is three times that (which is typical). In round numbers that’s $40-50K per language. Ten languages will cost the best part of half a million dollars to evaluate. Not small beer.
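
    For what it’s worth, here is that arithmetic as a throwaway Haskell sketch (the salary and overhead figures are just the round numbers above, not real data):

        -- Back-of-envelope cost of a language evaluation, using the round
        -- figures above (purely illustrative).
        costPerLanguage, totalCost :: Double
        costPerLanguage = devWeeks * weeklyCost
          where
            devWeeks   = 2 * 10            -- 2 weeks times 10 coders
            weeklyCost = 30000 * 3 / 52    -- $30K salary, 3x total cost of employment
        totalCost = 10 * costPerLanguage   -- ten languages

        main :: IO ()
        main = do
          putStrLn ("Per language:  " ++ show costPerLanguage)   -- about $35K before rounding up
          putStrLn ("Ten languages: " ++ show totalCost)         -- about $350K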

    Of course you could use students, but on average they will know Java or Javascript better than Python or Haskell, so how do you correct for that?

    Paul.

  13. pongba Says:

    I’m arguing that Haskell programs are faster to *write*. Execution speed is a much more complicated issue. FP tends to lose in simple benchmarks, but big systems seem to do better in higher level languages because the higher abstraction allows more optimisation.

    I always hear people saying that, but I really don’t get it.
    I know that *theoretically* abstraction (or the absence of side effects, etc.) gives more opportunity for optimization, but I have never seen people show some real data that can *really* prove it.
    One question constantly annoys me - if higher-level abstraction allows more optimization, then why does .NET put the burden of discriminating value-types and reference-types on us programmers? Shouldn’t referential transparency be better at this?

  14. Jonathon Duerig Says:

    I have two specific (and one general) criticisms to make about your line of argumentation:

    First, I think you do not adequately address the criticisms about lines of code as a metric. The cost of a line of code is the sum of five factors: (a) Difficulty of formulating the operation involved (original coder*1), (b) Difficulty of translating that operation into the target programming language (original coder*1), (c) Difficulty of parsing the code involved to understand what the line does (maintainer*n), (d) Difficulty of later understanding the purpose of that operation (maintainer*n), and (e) Difficulty of modifying that line while keeping it consistent with the rest of the program (maintainer*n).

    (a) and (b) are done only once, but (c), (d), and (e) are done many times whenever the program needs to be fixed or modified. Brooks’ argument was specifically that in the general case the time for (a) is more than 1/9 the time for (b), and the time for (d) is more than 1/9 the time for (c) and (e). This is important because (a) and (d) are both language and tool independent.

    When comparing the lines of code from different languages, it is important to realize that the formulation of the operations and the understanding of purpose are spread across those lines. And the verbosity of the language usually doesn’t impede either of these problems (unless it is extreme).

    For instance, take the creation of an iterator or enumeration in C++ or Java respectively and compare that to creating a fold function in Scheme. These are roughly equivalent tasks. In C++, an iterator is defined first by defining a class with various access operators like *, ->, ++ and -- and then implementing them. This adds a lot of baggage because there are half a dozen or so functions that must be defined and there is a separate class specification. In contrast, a Scheme fold function is much simpler from the language perspective. A single function is defined rather than half a dozen. It will almost certainly have fewer lines, possibly by 4 or 5 times.

    But let us look at what the creation of the iterator or fold function means from the perspective of items (a) and (d). Both of these are common idioms in their respective languages, so all of the code specifically dealing with iteration/folding is trivial to conceptualize and trivial to understand the purpose of. The difficulty in writing either a custom iterator or a custom fold function lies within the subtleties of the iteration. If it is a tree, what information needs to be maintained and copied to successive iterations (whether that be in the form of state, or in the form of argument passing)? Are there multiple kinds of iterations? How would they be supported? (For example, sometimes a user wants to traverse a tree in pre-order, sometimes in post-order, sometimes in-order, and sometimes level by level in a breadth-first order.) These are the questions which the original coder and the later maintainers will have to contend with. And these are really orthogonal to lines of code counts.
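
    To make the comparison concrete, here is a rough Haskell sketch (Haskell rather than Scheme, since it is the language most discussed in this thread; the Tree type and names are invented purely for illustration):

        -- A binary tree and a fold over it: one ordinary function where C++
        -- needs an iterator class with half a dozen members. (Illustrative only.)
        data Tree a = Leaf | Node (Tree a) a (Tree a)

        -- In-order fold, foldr-style.
        foldTree :: (a -> b -> b) -> b -> Tree a -> b
        foldTree _ z Leaf         = z
        foldTree f z (Node l x r) = foldTree f (f x (foldTree f z r)) l

        -- A different traversal order is just a different fold, e.g. pre-order:
        foldPre :: (a -> b -> b) -> b -> Tree a -> b
        foldPre _ z Leaf         = z
        foldPre f z (Node l x r) = f x (foldPre f (foldPre f z r) l)

        -- e.g. foldTree (:) [] t lists the elements in order,
        --      foldTree (+) 0 t sums them.

    The subtle questions above (what has to be carried from step to step, which traversal orders to support) are still there; they just are not buried in boilerplate.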

    But there is another factor at work here which makes lines of code a faulty cross-language measurement. Every language has a grain to it. If you program with the grain, then any difficulty will be easily solved by the tools in the language. If you program against the grain, then you will run into difficulty after difficulty. This applies to fundamental language properties. You can bypass the type system in C++ and avoid all the type checks, but it is cumbersome and unpredictable if you do it wrong. Ruby allows you to be much more flexible with types and provides a safety net. If you try to enforce a more strict typing in Ruby, then you will have to fight the language every step.

    But the grain of the language also includes the question of scale. Some languages provide a lot of flexibility. They allow compact and loose representations of programs which can be customized to the problem domain easily. These languages include Scheme and Ruby and Haskell. These flexible languages are very useful for small projects with one or a few developers because they can be metaphorically molded to fit the hand of the person who wields them. But there is a trade-off because they tend to be more difficult to use in large groups because it is harder for others to understand what is going on. This is a fundamental trade-off that programming languages must make. And it means that a language which is great at one end of the spectrum will likely be lousy at the other end. And this is reflected in the lines of code required for a particular scale of problem.

    My second criticism is in regard to your discussion of state. You point out that Brooks considered managing state to be a major essential difficulty of programming, and you then claim that functional languages obviate this difficulty and hypothesize this as the reason that they can be a silver bullet.

    I believe that you have misunderstood the kind of state that Brooks was referring to. He was not talking about run-time state but compile-time state. He was not talking about what variables are changed at run-time. He was talking about the interactions between components of the program. These interactions are still there and just as complex in functional languages as in imperative languages.

    Second, even when considering just the run-time state, the referential transparency of functional languages simplifies only the theoretical analysis of a program. As far as a normal programmer informally reasoning about what a program does is concerned, the programmer must consider how state is transformed in the same way whether a modified copy is made or a destructive write is made. This is the same kind of reasoning.

    Finally, I have seen many people talk about getting an order of magnitude improvement by finding some incredible programming tool. Functional programming is not unique in that respect. But in my experience this is more likely to be about finding a methodology that suits the person’s mindset than about finding the one true language or system. Somebody who thinks about programming in terms of a conceptual universe that changes over time will be an order of magnitude less effective in a functional environment. And somebody who thinks about programming in terms of a conceptual description of the result which is broken up into first class functions will be an order of magnitude less effective in an imperative environment.

    I have programmed in both imperative and functional languages. I know and understand the functional idioms and have used them. My mindset tends to the imperative, and I am a less effective programmer in functional languages. But I have seen programmers who can pull a metaphorical rabbit out of a hat while tapdancing in them. This says to me that evangelism about functional languages or imperative languages is fundamentally misguided regardless of the side.

  15. Paul Johnson Says:

    Jonathon Duerig:

    I had decided not to respond to any further comments and instead get on with my next article. But yours is long and carefully argued, so it merits a response regardless. It’s also nice to be arguing the point with someone who knows what a fold is.

    You make the point that during maintenance the difficulty of later understanding the purpose of an operation is language independent. I’m not so sure. A maintainer may suspect that a C++ iterator is truly orthogonal, but it can’t be taken for granted. There may be a bug hiding in those methods, or perhaps someone fixed a bug or worked around a problem by tweaking the semantics in an irregular way. Also a lot of the understanding of a piece of code comes from context, and it helps a lot to be able to see all the context at once (why else would 3 big monitors be a selling point for coding jobs?). So terse code makes it a lot easier to deduce context because you can see more at once.

    (Aside: I remember in my final year project at Uni going into the lab at night because then I could get two VT100s to myself).

    You say that Scheme, Ruby and Haskell can be moulded to fit the hand of the user, making them more productive for single person tasks, but less productive for groups because of mutual comprehension difficulties.

    This is hard to test because of the lack of statistics, but Haskell is strongly typed and the community has already developed conventions and tools for documentation and testing (Haddock and QuickCheck). I can see that Scheme macros can be used to construct an idiosyncratic personal language, but I really don’t see how this could happen in Haskell. Things that get done with macros in Scheme are usually done with monads in Haskell, but whereas Scheme macros are procedural, monads are declarative and must conform to mathematical laws, making them tractable. My experience with Haskell monads is that you generally build a monadic sub-language in a single module and provide libraries for it in some other modules (e.g. Parsec), and that the end result is intuitive and simple to use. But maybe I’ve only been exposed to well-designed monads.
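
    To show the sort of thing I mean, here is a toy Parsec fragment (the grammar is invented for illustration, not taken from any real project):

        -- A toy monadic sub-language: a parser for comma-separated integers.
        import Text.ParserCombinators.Parsec

        number :: Parser Int
        number = do
            ds <- many1 digit
            return (read ds)

        numbers :: Parser [Int]
        numbers = number `sepBy` char ','

        -- parse numbers "" "1,22,333"   gives   Right [1,22,333]

    The do-block is the sub-language; everything around it is ordinary Haskell.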

    On the subject of state and informal reasoning: personally I use whatever forms of reasoning will work. In debugging a particularly complex monad I once resorted to writing out the algebraic substitutions long-hand in order to understand how the bind and return operators were interacting. It worked, and I got the monad to do what I wanted. I routinely use informal algebraic reasoning of this sort in simpler cases in order to understand what my program is doing. Any informal reasoning must be a hasty short-hand version of what a full formal proof would do, and it follows that language features that make full formal proof easier will make the informal short-hand mental reasoning easier too.
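
    As a trivial example of the kind of substitution I mean (the Maybe monad, which is small enough to write out in full; a real debugging session just has more steps):

        example :: Maybe Int
        example = (return 3 >>= \x -> Just (x + 1)) >>= \y -> Just (y * 2)

        -- Expanding by hand with the Maybe definitions from the Prelude
        -- (return x = Just x;  Just x >>= f = f x;  Nothing >>= f = Nothing):
        --
        --   example = (Just 3 >>= \x -> Just (x + 1)) >>= \y -> Just (y * 2)
        --           = Just 4 >>= \y -> Just (y * 2)
        --           = Just 8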

    Pure functions are particularly valuable when trying to understand a large program because you don’t have to worry about the context and history of the system for each call; you just look at what the function does to its arguments. In a real sense this is as big a step forwards as garbage collection, and for the same reason: any time you overwrite a value you are effectively declaring the old value to be garbage. Functional programs (at least notionally) never require you to make this decision, leaving it up to the GC and compiler to figure it out for you based on the global system context. Thus complex design patterns like Memento and Command are rendered trivial or completely obsolete.
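
    A throwaway illustration (the Document type is invented for the example): because nothing is ever overwritten, “undo” is just keeping hold of the old value, which is most of what Memento exists to provide:

        -- Invented example: an "edit" returns a new value and the old one
        -- survives, so undo comes for free. No Memento machinery required.
        data Document = Document { title :: String, body :: String }
          deriving Show

        setBody :: String -> Document -> Document
        setBody newBody doc = doc { body = newBody }

        draft, revised :: Document
        draft   = Document { title = "Draft", body = "first attempt" }
        revised = setBody "second attempt" draft   -- draft itself is untouched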

    Finally you talk about the many over-hyped technologies in this industry. Yes, hype is a common problem. Those of you who think you have a silver bullet are very annoying for those of us who actually do. :-)

    Paul.

  17. Toby Says:

    Since I happened to stumble upon an actual Dijkstra cite just now, I thought I'd add it here (having read and appreciated your original post a few days ago).

    In EWD513, “Trip Report E.W. Dijkstra, Newcastle, 8-12 September 1975,” he writes,

    “The willingness to accept what is known to be wrong as if it were right was displayed very explicitly by NN4, who, as said, seems to have made up his mind many years ago. Like so many others, he expressed programmer productivity in terms of ‘number of lines of code produced’. During the discussion I pointed out that a programmer should produce solutions, and that, therefore, we should not talk about the number of lines of code produced, but the number of lines used, and that this number ought to be booked on the other side of the ledger. His answer was ‘Well, I know that it is inadequate, but it is the only thing we can measure.’. As if this undeniable fact also determines the side of the ledger….”

    That is the edited version as printed in “Selected Writings on Computing: A Personal Perspective”. The original text can be found in the EWD Archive, at http://www.cs.utexas.edu/users/EWD/transcriptions/EWD05xx/EWD513.html