Monday, 2 July 2012

The Trouble with our "Piggy" Banks (and our computers)



We are having serious problems with our banks. In recent weeks the bank that I use decided to make a change to its computer system, and got it very badly wrong. I was lucky, as I had no transactions going through during the crisis week. On the other hand a very large number of people found that their salaries had not been paid, or that they could not withdraw desperately needed funds. For instance some people had house purchases collapse - leaving them homeless - because their solicitor could not transfer the payment. At least one person spent a weekend locked in a cell because the Court could not cash his bail cheque.

Since then there has been the scandalous way in which another big bank was able to make bigger profits, and its staff to get bigger bonuses, by manipulating the LIBOR rate. This is the interest rate that is used when banks borrow money overnight, and of course everyone assumed that it was accurate and trustworthy because it is calculated on a computer. Except that the computer depends on being fed accurate data, and it turns out that bankers who wanted to maximize their bonuses could, with the aid of friends in other banks, manipulate the figures - and had been doing so on a regular basis for years.

There are many other problems related to the mis-selling of insurance products, to the threat caused by the uncertainty over the euro, and perhaps some even bigger shocks to the system. But don't worry, the good old Bank of England has the answer. We are reassured that it would not sink so low as to print more money. All it does is add a few trillion to the numbers in the Bank's computer and call it "Quantitative Easing." This must be OK because it is all in a computer, we can't see huge piles of extra bank notes, and our minds can't really comprehend numbers so big.

If we pause for a moment to look for a common factor we will find that virtually everything associated with banking (and many other aspects of our life) involves the computer. It is easy to say that this is simply because computers turn up everywhere, so they cannot be blamed - but let us brainstorm around the idea that computers have at least aggravated the situation.

For a computer to work someone has to write a program which must accurately reflect, in advance, the tasks the computer is to perform. The first computers were built to process large volumes of numbers in clearly predefined ways because humans could not do such tasks quickly and reliably - and the results were very successful, despite the machines being very difficult to program. Success bred success: it was found that many repetitive tasks could be precisely defined, and that it was possible to construct tools to simplify the construction of new applications. As someone who was born before the first computers existed I can fairly say that no one could have predicted the great effects computers have had on everyday life, and the advances we have made in our understanding of science and the universe in which we live.

So the theoretical model tells us that computers are no better than our ability to accurately predict future requirements. For many tasks which involve dynamic interaction with people in the real world it is not possible to predict all the theoretically possible situations that could arise. This means that there will be situations where the computer can no longer adequately support its human users, because something has happened which is outside its specification (or because the specification is defective).
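
To make the point concrete, here is a deliberately toy, invented fragment (the function, the currencies handled and the error message are all hypothetical) showing how a program copes only with the situations its specification anticipated:

    # A toy, invented illustration: the program only "knows" the cases its
    # specification anticipated, and anything outside that specification
    # leaves the human user with a useless error message.
    def transfer_funds(amount, currency):
        if currency == "GBP":
            return "Transferred %s GBP" % amount
        elif currency == "EUR":
            return "Transferred %s EUR at today's rate" % amount
        else:
            # The specification never imagined any other case.
            raise RuntimeError("Error 734: please contact your administrator")

    print(transfer_funds(100, "GBP"))   # works exactly as designed
    print(transfer_funds(100, "FRF"))   # an unanticipated currency - failure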


We are all aware of how, on occasion, our personal computer comes up with a "meaningless" error message and refuses to continue - or loses some of the information we have typed in. In bigger systems, such as the big bank system crash mentioned at the beginning of this blog, the front line staff suddenly find that they can no longer provide a proper customer service. Bigger crashes are perfectly possible, and I wonder how robust most bank systems would be if we had a major depression and, at the same time, the euro collapsed and the countries involved returned to their old currencies.


Another problem is that modern computers are the ultimate black box - there is no way that the user can really know what the processor is doing for him, as there is no usable communication route. Most of the time the user has to take what the computer does on trust - which is why the manipulation of the LIBOR rate escaped notice, and why other dishonest activities - such as viruses - can hide themselves within the black box. It also means that when something goes wrong the system is "fail dangerous": at the time of the difficulty the ultimate users of the system (clerks and customers in the case of banks) are left with a useless system.


Of course, I can hear you say, a vast amount of money is spent on belt-and-braces safety systems to catch problems as early as possible, and to safeguard data from corruption. In fact my first computer publication was a company report explaining the recovery problems when, in the late 1960s, organisations started to move from batch systems using magnetic tape to simple online systems using direct access storage. The difficulty is that the bigger the system, and the more it and its human users are cocooned from failure, the more likely things are to go badly wrong when the unanticipated happens and humans are called in to do a "quick fix". This leads to discussions as to whether, in some cases, such as landing planes, or controlling safety-critical plants such as nuclear reactors, the human should be entirely written out of the system!

It is very easy to simply sit back and accept the situation. People and societies have always adapted to live with successful new technology and its limitations. For instance one of the side effects of the steam engine was the urban factory and the slum housing for its workforce. Where there are alternative technologies the one that becomes accepted tends to be the first on the market rather than the best technical solution. In Victorian times Brunel tried to introduce a wider gauge for railway lines - which was more stable and could carry heavier loads - but it had to be abandoned to conform with earlier standards. More recently it is said that Betamax was better than VHS - but it lost out because VHS got in first and captured the bulk of the market.


So could there be an alternative technology to the stored program computer which has not been explored because the computer is considered the best thing since sliced bread - and as a result "it is quite obvious" that there is no need to look for an alternative?


When work started on computers in the 1940s a stampede began to develop the best automatic calculating machines to carry out highly repetitive operations which people found difficult to do. In the rush to make money and build careers no one stopped to do blue-sky research on alternative types of information processing machines. There was no problem if the computer was not powerful enough, because in a few years there would be much faster machines with even more memory. If the task proved too difficult, just employ more specially selected "logical thinkers" to define and write the programs, and then to write software to make it easier to write programs. By the time Xerox PARC started to think about getting humans involved via a graphical interface it was taken for granted that the stored program computer was the best foundation for building systems to interact with people - with governments deciding that school children should be taught programming, because it was a difficult but important skill.


So is there an alternative technology that has been overlooked because computers were so successful, and because so much money and so many careers are committed to preserving the status quo?


The stored program computer approach starts from a design which allows mathematicians to automatically process numbers according to predefined algorithms, using numbers to act as commands and addresses. One alternative approach is to look at a system which automatically processes words, using words to act as commands and addresses - and where the task can grow incrementally rather than having to be defined in advance.
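
Purely as an invented toy sketch (it is not CODIL, and the names and data are made up), the contrast might look like this: instead of fixed record layouts reached through numeric addresses, facts are sets of named items retrieved by matching words, and new kinds of fact can be added incrementally without re-specifying everything in advance.

    # An invented sketch, not CODIL itself: facts are sets of name = value
    # items, and retrieval works by matching words rather than by following
    # numeric addresses into a pre-defined record layout.
    facts = [
        {"customer": "Smith", "account": "deposit", "balance": 120},
        {"customer": "Smith", "account": "loan", "balance": -500},
        {"customer": "Jones", "account": "deposit", "balance": 75},
    ]

    def select(query, knowledge=facts):
        # Return every fact whose named items all match the query.
        return [f for f in knowledge
                if all(f.get(name) == value for name, value in query.items())]

    print(select({"customer": "Smith"}))    # everything known about Smith
    print(select({"account": "deposit"}))   # every deposit account

    # A new kind of fact can be added without changing any record layout
    # or any existing program logic.
    facts.append({"customer": "Jones", "address": "High Street"})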

O.K. - I am biased in suggesting such an alternative, because in 1967 I stumbled on an alternative, people-friendly approach while working on a major commercial data processing system. This led to the development of an approach called CODIL, which was showing promise when I had to drop it because of the strain of being "out of step" with the computer science establishment, plus health-related problems. More recently I have come to suspect that the approach throws light on how the brain works.

Of course you may reject what I am trying to do because you believe (can you scientifically prove it?) that the stored program computer approach, on which our banking systems (and many others) depend, is the only possible information processing model. If you agree that the rush to build computers bypassed a proper look at the alternatives, I am not asking you to accept my approach - which is far from complete - but rather suggesting that you give some serious thought to the limitations of existing computers and what the possible alternatives might be.

1 comment:

  1. A few days after I posted this, a summary (http://www.newscientist.com/blogs/onepercent/2012/07/af447-final-report.html) of the BEA report (http://www.bea.aero/fr/enquetes/vol.af.447/recos05juillet2012.en.pdf) into the crash of Air France 447 in the South Atlantic was published.

    My interpretation of the computer problems, on reading the report, is:

    The worst thing that can happen to a plane is to stall, and when the crisis started the plane had plenty of height to dive and gain speed - but due to confusing messages and a lack of relevant training it would seem that the computer alarm systems were sending the wrong (or overly confusing) messages, and the pilots apparently failed to realize that the plane was falling from the sky because it was going too slowly.

    In fact, if the plane's forward air speed over the wings dropped below 60 knots the stall alarm stopped - presumably because the computer system designer assumed that if its forward speed was so low it could not be in the air! But it would seem that the plane was falling almost vertically in a nose-up position. All the pilots needed to do was put the nose down into a dive; the air speed over the wings would have increased and they would have escaped from the stall.
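
    To illustrate my reading of the report (this is an invented fragment, not the actual flight software), the warning appears to have behaved as if it were gated by a validity check of roughly this form:

        # An invented sketch, not the real A330 code: a stall warning gated by
        # an airspeed "validity" threshold. Below 60 knots the data is treated
        # as invalid and the warning falls silent - exactly when the aircraft
        # is falling, nose up, in a deep stall.
        STALL_ANGLE_OF_ATTACK = 10.0   # illustrative threshold, in degrees
        MIN_VALID_AIRSPEED = 60.0      # knots

        def stall_warning(angle_of_attack, airspeed):
            if airspeed < MIN_VALID_AIRSPEED:
                return False           # "too slow to be flying" - no warning
            return angle_of_attack > STALL_ANGLE_OF_ATTACK

    A perverse consequence of logic of this kind is that lowering the nose - the correct recovery action - raises the airspeed back above the threshold and makes the warning start again, which can only add to the confusion in the cockpit.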
