Sunday, November 14, 2010

 

Testing times

Interesting piece in yesterday's DT about how the Deepwater disaster was the same sort of thing as Chernobyl. That is to say, in both installations the controlling and experienced engineers drove through many layers of red lights, red lights coming from well-designed testing protocols. Despite all their diplomas, they chose to ignore them all and crash on forward, in both cases with catastrophic results. The writer described it as a form of hubris.

Which moved me to ponder the testing regimes we had on the computer systems we were working on when I eased out of the world of work at the Home Office. Testing was a rather tedious business which lots of people did not want to get mixed up in. Heads of IT saw it as a good game for pedants who would insist on dotting every last 'i' and crossing every last 't'; a sort of geeked-up version of traffic wardens, who will not be pushed around by project managers or anybody else anxious to get the show on the road. Project managers tend to favour a more flexible approach. Microsoft, who were, as it happens, doing some development work for us at the time, struck me as being good at testing. Perhaps because they were applying regimes developed for big products like Word to the much more modest products being put together for the Home Office.

The bit we got right was that there should be a testing function, quite separate from the constructing function. But as it happened, for one reason or another, this testing function was let out to contractors rather than kept with an in-house team. Perfectly proper and professional people, but with a natural interest in maximising the amount of testing done at any one time. They would say in their defence that if you want us to sign off on this mission-critical system, you are going to have to give us the time (that is, the money) to do it properly.

The difficulty is to get the management regime right. Does one need a permanent, in-house function, all tooled up and ready to take on whatever the rest of the IT department can throw at it? Do you have enough going on to justify the large expense involved? To whom should the testing function report? Because if the testing function reports to the project manager, the latter will need to be very disciplined not to take a flier from time to time; to go for getting the product on the road rather than getting the product right. But if testing reports too high up the chain, management is apt to give the testers too much of a free hand, and they and their work can grow like a cancer. From on high in the management chain, supposing they bother at all, it can be hard to make the right judgement about this. Are the testers feathering their nest, or are there real problems with the product?

Clearly, the engineering chaps I started with got it wrong. Indeed, the impression given is that the engineers who did the day-to-day running were the same engineers as those who did the tests and made the decisions. Maybe a better division of powers was called for. I remember from my days of concrete cubes a world where one bunch of engineers supervised the building of the bridge, another bunch watched them, and a third bunch worried about the quality of the concrete on their behalf. A lot of engineers, but the number of collapsing bridges was modest, if not quite zero.

More parochially, following the recently reported whizzes and pops from the PC, I have now done a defrag. It took a few hours and I have had one crash on boot since, but I do seem to be starting up quicker with fewer whizzes and pops, so perhaps it was worth the bother.
