Saturday 21 April 2012

The Limitations of the Stored Program Computer

Brain Storms - 10

I often get asked why I suggest that there is something special about CODIL, and the easiest way to explain it is to discuss the use of models to represent real world information. My scientific training was in the field of chemistry – with a particular interest in the theoretical prediction of chemical properties. As a chemist I was very used to having a range of modelling techniques to choose between.

None of the models did everything – and different types of model served different purposes. If you had a problem and found the model you were using was inadequate it was quite natural to look to see if a different model was available – or whether you could find a better way of representing the problem.

Instead of thinking about chemical compounds let us think of the problem of modelling the processing of information, and what the options are. Of course computers allow us to build a great variety of information processing models – but all reduce to the same model at the “atomic” level. Normally this is described as an array of numbers (the data), a subset of which is interpreted as a program, which uses numbers to address the array. However there is one essential component which is usually overlooked in theoretical discussions but needs to be considered in the general case. This is the intelligent human designer who constructs the program. In many cases the resulting working model may also need to interact with other humans – who provide or use data operated on by the program. This post is concerned with information models that include human involvement.
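To make this “atomic” model concrete, here is a toy sketch – my own illustration in Python, not any real machine – of a memory that is nothing but an array of numbers, a few of which are interpreted as a program addressing the rest of the array by number:

```python
# A toy stored program machine: memory is one array of numbers, and a subset
# of those numbers is read as instructions that address the array by number.
memory = [
    1, 8, 9, 10,   # opcode 1: memory[10] = memory[8] + memory[9]
    0, 0, 0, 0,    # opcode 0: halt
    2, 3, 0, 0,    # the "data" - just more numbers in the same array
]

pc = 0                          # program counter: itself just a number
while memory[pc] != 0:          # opcode 0 means halt
    op, a, b, c = memory[pc:pc + 4]
    if op == 1:                 # add is the only operation in this toy machine
        memory[c] = memory[a] + memory[b]
    pc += 4                     # advance to the next four-number instruction

print(memory[10])               # -> 5: the result sits at a numeric address
```

Everything in the model – program, data and addresses – is just a number; the meaning is supplied entirely by the human designer outside the machine.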

So let us take a vast application which seems almost impossible to implement, and consider it in the light of the extended model which includes human designers and human users operating in a dynamic real world context. As I live in England the obvious area to choose is the attempt to provide a single integrated system to handle patients, their records and their treatments in the National Health Service.

For a perfect system – one which could cope with every eventuality – one would need to know all the possible types of information generating events that could take place, such as a doctor prescribing a drug, a patient undergoing an operation, an epidemic among medical staff, the report of a new resistant bacterium, the introduction of a fundamentally different type of diagnostic equipment, the decision not to spend money on preventative treatment due to shortage of funds, a major release of radioactivity affecting thousands of people, a government decision to change the way services are managed, etc., etc., etc.

Of course we will never have a perfect system, as we don't have infinitely intelligent human designers with total knowledge of all possibilities – but if we did we would find some types of events occurring a million times or more a day, while there might be a million freakish but theoretically possible events which individually might only occur once in a million years. We are faced with making resource-controlled compromises: what matters is whether a type of event occurs often enough to justify the costs of the human designer identifying the event and specifying how it is to be handled. These costs will need to include the training costs of the users of the system, and reflect their ability to work with it. Once you have a working system there will be events which occur that were not catered for and possibly not anticipated. In some cases it may be easiest to simply ignore the unexpected, but in other cases action may be needed which has to be undertaken outside the system – as the whole “black box” design of stored program computers means that there is no way there can be useful interaction between the user and the system once the task moves outside the predefined boundaries.
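To put rough numbers on this trade-off (entirely hypothetical figures, purely to illustrate the break-even reasoning):

```python
# Entirely hypothetical figures: the break-even point is just the cost of
# specifying a handler divided by the saving each time it fires.
design_cost = 5000.0      # designer's cost to identify and specify one event type
saving_per_event = 0.50   # saving each time the system handles it automatically

break_even = design_cost / saving_per_event
print(break_even)                      # -> 10000.0 occurrences repay the design work

# An event occurring a million times a day repays that cost within minutes;
# a freak event expected once in a million years never will.
events_per_day = 1_000_000
print(break_even / events_per_day)     # -> 0.01 (days to break even)
```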

So back to the original topic of models. Is there an alternative information processing model which reduces the need for prior knowledge and pre-definitions (reducing the costs of gathering and recording such information)? Is there also a model which allows the system and its human user to work together symbiotically when faced with a situation neither had anticipated?

As far as I can see no-one ever set out to do the relevant blue sky research in the early days of computing. Immediately after the war it was clear that there was a very large market for programmable electronic calculating machines. There was a mad rush among researchers and manufacturers to be among the first to exploit the new technology – and anyone who stopped to "waste" time on basic research would be left behind in the rat race for prestige and money in the new invention. A continual stream of faster processors and larger memories made it easier to write bigger and bigger programs without stopping to ask if bigger automatically meant better. By the 1970s Xerox PARC was asking how to help the human user, and they took it as self-evident that the stored program computer architecture was the place to start – and to be fair they did a good job in trying to hide some of its inadequacies.

But if you stop to think about it there must be a fundamentally different model, because nature got there first. A human baby must start off with no advance knowledge of the world, yet it can bootstrap itself up to become a significant information processing machine.

This is where my work on CODIL comes in. Before I joined Shell Mex & BP in 1965 I had been concerned with the problems of handling very complex information manually and using it to provide a manual management information service. I was well aware that situations could arise that no-one had anticipated, and that sometimes these “exceptions” were of vital importance to the future of the company. My first experiences with the massive SMBP computer operations made me realise that most of the problems were due to communication failures between the management and the computer. This meant that when I was asked to look at their complex sales contracts system my design priorities were to provide good two-way communication between management and the computer, coupled with dynamic flexibility in dealing with sales contracts in an ever changing market place. The very idea that anyone would try to pre-define all the possible types of sales contract never occurred to me, and I was unaware that such a pre-definition was what everyone in the industry was expecting me to produce. Pre-definition not only defines what you can do – it also rigidly defines what you cannot do – and I believed that giving management the maximum flexibility to act quickly and decisively was of paramount importance. (More information on what I proposed.)

I then moved to English Electric Leo Marconi to look at what top of the range computer users would want to buy in a few years' time – and after a few months drafted a note suggesting a rethink of the computer processor architecture needed for complex real world tasks involving human interaction. The idea was backed by Pinkerton, Caminer and Aris, and the CODIL project was born. Papers describing the project are listed elsewhere, but the important differences (looking at the project in retrospect) relate to the question:

Which came first, the program or the data?
  • Conventional programming involves a top down approach where one starts with a global definition of the task in hand – putting "the program" first. CODIL is a bottom up approach, starting from the individual transactions, and uses a very simple algorithm to make deductions from the information it has (in conventional terms, generating micro-programs on the fly). In effect it puts the "data" first, although it is better simply to drop the distinction. To someone used to conventional programming it sounds impossible – but it demonstrably works!
  • Conventional programming works with numbers – and data is, at the lowest level, addressed by number. CODIL works with set names – and all addressing is done associatively by set name, rather than by location (see the sketch after this list).
  • The conventional computer architecture is a black box, while CODIL provides a white box approach which allows it to communicate with users in the appropriate application terminology at the (conceptually) hardware level.
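As a rough illustration of the second difference, here is a minimal sketch – my own, in Python, not actual CODIL code, with hypothetical set names – contrasting a value reached through a numeric location with a value reached associatively through the set name it belongs to:

```python
# My own illustrative sketch, not actual CODIL: number-addressed versus
# name-addressed storage.

# Conventional model: a value lives at a numeric location, and the meaning
# of that location lives outside the machine (why slot 7?).
store = [0] * 16
store[7] = 42

# Associative model: a value is reached through the set name it belongs to,
# so the application's own terminology acts as the address.
facts = {
    "CUSTOMER": "Smith & Co",          # hypothetical set names and values
    "CONTRACT-TYPE": "Spot purchase",
    "QUANTITY-GALLONS": 5000,
}

def matches(statement, facts):
    """True if every (set name, value) pair in the statement is among the facts."""
    return all(facts.get(name) == value for name, value in statement)

# A statement can be tested against whatever facts happen to be present,
# without a prior global definition of every possible record layout.
rule = [("CONTRACT-TYPE", "Spot purchase"), ("CUSTOMER", "Smith & Co")]
print(matches(rule, facts))    # -> True
```

Because the set names themselves carry the application terminology, the same structures can be shown directly to the user – which is the "white box" of the third difference.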
Unfortunately the work was axed without proper assessment when ICL closed the English Electric Leo Marconi Research Labs in 1970, but it continued, with inadequate funding and support, for some years. By 1980 it had been taken to the point where it could be used to support interactive classes of up to 125 students, and to support a number of other applications. In 1985 a well reviewed demonstration version was produced on a BBC microcomputer (the key code was only a few hundred bytes long because the underlying idea is so simple) to show that it could be implemented on a single chip if only the money was available. While a paper describing this stage of the research appeared in the Computer Journal in 1990, the project had already closed down due to lack of funding and for health related reasons – which are also the reason why the product was not actively marketed.

Recently interest in the research has been revived (but so far with no funding or even a proper home) and it seems that the model could run on a network of neurons and might well be a first shot at explaining how the human brain bootstraps itself up and handles complex information processing tasks. (See earlier Brain Storms.)

What happens next really depends on you ....

If you think the approach is fundamentally wrong please read the reviews to see what a large number of independent people were prepared to say about the system in public – and then add your reactions as a comment. The software is still available if you have access to a BBC computer.

Earlier Brain Storms
  1. Introduction
  2. The Black Hole in Brain Research
  3. Evolutionary Factors starting on the African Plains
  4. Requirements of a target Model
  5. Some Factors in choosing a Model
  6. CODIL and Natural Language
  7. Getting rid of those pesky numbers
  8. Was Douglas Adams right about the Dolphins
  9. The Evolution of Intelligence - From Neural Nets to Language
