Tuesday, 19 September 2017

Trapped on a warming planet

Trump fiddles while the world burns
The news on the BBC web site this morning was not good. Hurricane Maria is tearing up the West Indies only weeks after Hurricane Irma. More lives will be lost, homes and livelihoods will be destroyed, and the worsening climate will claim more victims.

On another page there is a report that this winter in Australia has been the hottest ever with over 260 heat and low rainfall records being broken - suggesting that there could be a record number of bush fires in the summer.

At a less serious level the last patch of snow is about to vanish from a location in Scotland where normally it remains all the year round.

These events come as no surprise to me. In July 1990 I joined the CSIRO in Australia for a year, based in North Ryde, Sydney. My first job was to look through a pile of research papers - and the first one explained why, as the world warmed under a man-made blanket of carbon dioxide, we could expect bigger hurricanes. The idea was to set up an information system which followed the latest climate change research and made the information available, in an easy to understand way, to the politicians and government of Australia.

A Postcard from 1908
"Did the system you produced have beneficial effects on the world's climate?", I hear you ask. In fact the problem is that Trump was not the first climate denier (just the most dangerous) and I had only been working for a couple of months when the project was deemed unnecessary and I was moved to produce a prototype environmental database for Australian Heritage.

So is there any good news? A recent scientific article published in Nature Geoscience concludes that "limiting warming to 1.5°C is not yet a geophysical impossibility, but is likely to require delivery on strengthened pledges for 2030 followed by challengingly deep and rapid mitigation."
What this actually tells us is that hurricanes are going to get even bigger, Australia will get even hotter, and many other major changes will take place. However if everyone (including Trump and the USA) took the challenge seriously, things would not be quite as bad as they would have been if we did nothing. In effect we (and our children and our children's children) are all going to suffer because politicians worldwide didn't take the issue seriously enough 25 years ago.

What is happening to General Practices in the NHS

Over 50 years ago the family moved to Tring, and for all this time we have been registered with a practice run by a single doctor whom we got to know and who got to know us. But the NHS is changing and on October 1st the practice we are registered with will be merged with the very large surgery at the other end of town. Having been on a number of health committees as a public representative I understand the pressures on the NHS and the knock-on effects on patients.
I was therefore delighted to see that one GP practice had produced an excellent video of what is actually happening to General Practice in the NHS.
Click for full video

Thursday, 14 September 2017

My personal battle between complex and complicated systems

I recently decided to drop into a FutureLearn course "Decision Making in a Complex and Uncertain World" by the University of Groningen. The opening section really made me sit up as I realized that I had never seriously thought about a formal definition that clearly distinguished between complex and uncertain systems and complicated but predictable ones. Of course I was well aware of the difference in practice but having a definition clarified a number of issues relating to how my research into a human-friendly computer (CODIL) started, why the research came to be abandoned, and why there is now renewed interest in the subject.
Fossil Elephant Tooth

Temperamentally I am a scientist who is attracted to trying to understand complex and uncertain "real world" systems; complicated, predictable technology-based systems bore me. After six years studying Chemistry, and ending up with a Ph.D. in Theoretical Chemistry, the last thing I wanted to be was a narrow-minded specialist knowing an enormous amount about very little. In fact while I was studying Chemistry my spare time was spent on a complex task - trying to understand the geomorphology of some nearby limestone caves. This involved the interaction of various agents - which operated over timescales varying from a day (such as a major flash flood) to hundreds of thousands of years (such as the Ice Ages). As the landscape changed the key areas were typically those that were eroded away - so that the detailed evidence that would be needed for any precise reconstruction did not exist.

My first job was a complex task and involved monitoring (and also indexing) the internal research and development correspondence between a UK research organisation and its overseas branches. The task involved exception reporting and ensuring that the right people in the organisation knew of significant developments. One day there might be a complaint from an Australian customer that our product didn't work - which was the first we knew of a new form of insecticide resistance. A problem with a cattle tick trial in South Africa might be followed by a rumour about a competitor's product picked up by a salesman in a bar in South America. Then the United States would bring out new regulations for pesticide residues. There were also significant problems in filing and indexing the information because we couldn't know in advance which projects would expand enormously to become new market leaders, and which would be abandoned at an early stage of the research.

In 1965 I felt it could be useful to know more about computers and whether they could help in the work. While I had not yet read Vannevar Bush's 1945 essay "As we may think" (which in effect predicted something like Wikipedia) I was definitely thinking about computers supporting a network that allowed information to be shared. My employer was not interested and I decided to switch my career and applied to a local computer centre to become a systems analyst. In fact the system run by the Shell Mex & BP centre at Hemel Hempstead was probably one of the most advanced commercial batch processing systems in the UK at the time.

LEO III - an early commercial computer
The plan was that I should start by doing a short spell programming their LEO III computers - and I found I had moved from working in a complex system where the unexpected was important to a complicated system where everything was meant to be pre-defined. To me programming was a boring task made worse by the time delays due to the (by modern standards) crude computer facilities. However a major incident on my first day demonstrated that complexity could be found when you considered the programs as part of the whole company trading in the real world. A major update of the customer master records had just taken place and about a million new record cards had been printed. As soon as details of new sales started to come in it became clear that many thousands of active customer records had been incorrectly deleted. It turned out that sales staff had not realized how the change affected them and had failed to provide additional information on a particular class of customer. As a result of this incident I took a particular interest in why programs failed and how errors could be minimized. In nearly every case the problem could be linked to an error or misunderstanding in the communication chain between the sales department and the actual program code.
Contemporary BP Garage

Let me give one example. My duty was to write the program which created a sales database for the garages which sold or used our petrol and lubricating oils. Other people in the team wrote programs which used this data to produce sales reports for senior management and for each individual salesman. The first live run resulted in two errors being reported. One salesman reported that one of his customers would never have bought so much of a particular brand of lubricating oil, while Head Office reported that credit sales for one brand of diesel fuel were 17% too high. There was no problem with identifying the program errors - but for me these two reports demonstrated a major weakness in the system. Both these errors should have generated a large number of error reports. A large number of sales staff had either not spotted the errors, or at least not reported them. It was also clear that the trial data supplied for testing purposes had failed to represent the true variation between customers that occurred in the real world.

After about a year on the programming floor I was moved to Systems as the company was to acquire a "next generation" computer which would have direct access files and at least some user "glass teletype" style terminals. I was asked to look at all aspects of a vital pricing program which included many individual customer contracts - to see what needed to be done to convert it to the new system. As far as I know no-one had tried to move such a complicated application in this way - and I had no guidelines to work to. The examination of the program code (in assembly code for the existing computer), the systems documents (not up to date), the clerical user manual (hard to understand) and actual computerized customer contracts showed there were many serious problems. A complete re-write would almost certainly be needed.

How things can go wrong

I took the view that the sales management staff (who were in touch with the complex real world problems of the market place) should be in control using terminals, cutting out most of the Chinese-whispers paper trail required to specify the existing batch system. To do this one needed a program which could establish a symbiotic relationship with the sales staff, and in order for such an approach to work they needed to trust the system to do what they wanted. Not only must it be easy for them to instruct, but it should also be able to tell them what it was doing in terms they could easily understand. It also needed to be robust, fail-safe, and efficient in terms of computer resources.

When I submitted a draft proposal I had no idea that I was suggesting anything unusual. It was vetoed on the grounds that "salesmen don't understand computers" and my argument that "I agree - so we should design computer programs which understand salesmen, allowing them to handle complex and unexpected real world situations" was dismissed. I was firmly told that the job of a systems analyst was to generate a complicated global model which included a precise predefinition of every type of sales transaction that might be required. Despite this objection I understand that parts of my proposal were actually implemented after I had left the company.

As soon as English Electric discovered they had not won the Shell Mex & BP contract I was head-hunted to do market research on the requirements for top-of-the-range next generation commercial computers. This was the kind of complex task I enjoyed and involved talking about current problems and future expectations with people ranging from computer engineers responsible for processor design to senior management in large companies who had problems with complex human computer interfaces. After a few months I realized it might be possible to generalize my Shell Mex & BP idea and to design a computer (using available components) which was fundamentally human friendly and could handle a wide category of hard to define information processing problems.
Designing a human-friendly computer

Such a system was considered to have very considerable commercial potential and within weeks I was project leader of a small research team working on a simulation program to demonstrate that the idea worked. Two years later all the project goals had been met - but there were two serious problems. The first was that, at a time when the idea really needed a lot of creative discussion at the complex application, human psychology and computer theory levels, I was told I must talk to no-one until patents had been taken out. The other problem was that a major Government-inspired re-organisation of the UK computer industry was now underway, the result being the company merger which formed ICL. Virtually all research projects were closed down unless they were immediately relevant to the proposed new 2900 series computers.

Using computers to help track planes by radar
So I found myself being made redundant - and working on software to track enemy planes on a massive military computer system - ending up as a kind of "trouble shooter" for the software manager. However I had obtained permission to publish (references) and continue the research if I could find a suitable university place. The problem was that I had no idea how controversial my idea would prove to be, or what support facilities I would need to get my ideas accepted.

Unconventional research ideas really need a supportive environment to get started and in the circumstances Brunel University was the wrong place to go. It was a brand new university, upgraded from a technical college, with very little experience of research - and no experience of controversial research. As a technological university its main aim was to train students to use the existing technology. This was particularly true of the Computer Science Department, which was very poorly equipped at the time. To be fair to Brunel the project made significant technical advances over the years, information was published on the application of the idea in a wide variety of fields (see publication list) and by 1986 a small but powerful demonstration package, MicroCODIL, was trial marketed and received very favourable reviews. However, almost certainly due to my limitations as a project manager, little real work had been done on the underlying theory, and not enough thought had been devoted to the economics of the project or its commercial implications.

In a Tasmanian Forest

Two things happened which led to the project closing. Years of underfunded working on the project had left me exhausted and I was also suffering from post-traumatic stress disorder as the result of a family death. At the same time a new professor considered that, because the university was basically there to train students to work in industry, the research was inappropriate, and made it very clear that I should leave or do something more productive. I decided to take a break from academic life and started with an informal sabbatical in Australia. Here I found myself working on complex environmental issues including the possible effects of climate change. I had intended to resume the research on my return, and planned to develop MicroCODIL to run on PC computers.

On returning from Australia I decided that I could be of more use to society if I worked at the local and national level on improving the provision of services for the mentally ill. In addition I could do local history research because no-one complained when I said that historical research was often complex and could involve considerable uncertainty.

Some twenty years later I decided to retire from the mental health field and began to re-assess my earlier research in 21st century terms, using the World Wide Web as an information source. This blog was set up as a result - and in addition to earlier postings two more postings should shortly appear here.

The first will look at the changes in relevant research in the 50 years since the work on CODIL started, and the second will look at CODIL as an "experimental symbolic assembly language" for describing the evolution of intelligence.

Monday, 11 September 2017

CODIL & Cognitive Load

I have just started on a FutureLearn Course "Decision Making in a Complex and Uncertain World" run by the University of Groningen - and this has led to some interesting conversations. This will be the first of a number of essays on some of the issues which relate to my work on CODIL.

Cognitive Overload?
CODIL and Cognitive Load

A big thank you to Bruce, who drew my attention to the idea of Cognitive Load and the work of John Sweller and others. It is not surprising that I was not aware of this during my original research on CODIL as John's paper was not published until June 1988 and the CODIL project effectively closed down in September of the same year.

It is clear that while the phrase "cognitive load" had not been invented when I started the CODIL project, the human factors that relate to the phenomenon were very much taken into account when I drew up the design of a possible human-friendly computer.

There is a very big difference in the task-related factors to be considered when comparing Sweller's work with mine. Much of the work relating to cognitive load has been concerned with human instructors teaching human pupils, and much of the related cognitive load theory is based on research in this area, often using computers as communication tools.

My CODIL research started as a design study into migrating a very large and complicated commercial batch processing system onto a more advanced computer which would have some, by today's standards, very primitive terminals. At the time (1967) no-one had attempted such a large and complex move and there were no guidelines to suggest how it might be done, which meant I had a pretty free hand to explore various options. In retrospect it is clear that my pre-computer background working on complex and uncertain information processing tasks influenced the solution I came up with.

I started with the assumption that the sales staff knew all about the customers, products and market place, and had to deal with unforeseen difficulties. It was sensible to ask if providing them with terminals would allow them to directly control the new system - by instructing it themselves, bypassing the systems analysts and programmers of the earlier system. It was also realised that sales staff couldn't control the system unless they could understand how it worked. For this reason the system had to be able to show the human user how the information they had provided was being processed. I was certain that one could not expect the sales staff to do conventional programming - so I thought about how to design a program that handled sales information in the same way as a salesman did.

Perhaps the idea of "thinking small" started because the amount of space on the terminals was going to be limited - and it was felt that all relevant information should appear on the screen at the same time - as there was no deliberate attempt to consider the work of Miller on short term memory. On the other hand I was personally well aware of the practical problems of limited memory capacity from my pre-computer work.

The important step was the discovery that the sales task when viewed globally was very large and complicated - but that it could be broken down into millions of remarkably simple tasks, each of which was very compact. It was only later realised that the number of chunks of sales information in these individual tasks fitted well with Miller's number of seven for the size of the human short term memory.

When the original idea was generalized to handle any suitable application the aim was to produce a system which never overloaded the human user. The approach used in the CODIL simulators was for the user to describe everything in "chunks" (i.e. CODIL items) where each chunk was a named set, a member of that set, a subset, or a description of that set in terms of other chunks. These chunks would reflect the language the salesmen used in processing sales contracts, etc.

Chunks were arranged in statements within a structured knowledge base - each statement representing a "context" - and for most tasks these would be less than 7 chunks long. "The Facts" was a statement which defined the current context (i.e. the information to be processed).

Decision making involved using The Facts as a search window over the knowledge base and merging in any matching contexts. This was a dynamic process, as every time The Facts were changed the search window automatically changed.

In effect the knowledge base contained statements that might (in a conventional computer system) be either program or data - and the search process extracted only the small number of directly relevant statements. The result could be likened to generating a micro-program tailored to the immediate sub-task. The key thing was that this should be small enough to show to the human observer without cognitive overload. (These micro-programs are always simple as they never contain an "IF" statement, simply because non-matching statements are automatically filtered out in the search process.)

In designing the algorithm for managing The Facts, the danger of overloading the human needed to be considered. Chunks which were no longer relevant needed to be discarded so that the number of active chunks did not exceed the human user's short term memory. Of course the computer could handle almost any number of chunks - but doing so would make it too confusing for the human and computer to work together symbiotically.
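The matching-and-pruning process described above can be sketched in a few lines of modern code. This is a minimal illustration only, not CODIL's actual notation: the function names, the (set name, value) encoding of chunks, and the example sales data are all my own assumptions.

```python
# Sketch of a CODIL-style search window, assuming chunks are modelled as
# (set_name, value) pairs and The Facts as a dict of set name -> value.

def matches(statement, facts):
    # A statement matches when none of its chunks contradicts The Facts;
    # chunks naming sets absent from The Facts count as new information.
    return all(facts.get(name, value) == value for name, value in statement)

def select_context(knowledge_base, facts):
    # The Facts act as a search window: non-matching statements are filtered
    # out, so the surviving "micro-program" needs no IF tests of its own.
    return [stmt for stmt in knowledge_base if matches(stmt, facts)]

def prune(active_chunks, limit=7):
    # Discard the oldest chunks so the active context stays within a
    # short-term-memory-sized window (Miller's "seven").
    return active_chunks[-limit:]

# Example: a tiny hypothetical sales knowledge base.
kb = [
    [("CUSTOMER", "Smith"), ("PRODUCT", "Diesel"), ("DISCOUNT", "5%")],
    [("PRODUCT", "Petrol"), ("DISCOUNT", "2%")],
]
facts = {"CUSTOMER": "Smith", "PRODUCT": "Diesel"}
relevant = select_context(kb, facts)  # only the Smith/Diesel statement survives
```

Note how the filtering step replaces conditional logic: a statement about Petrol simply never enters the window while The Facts say Diesel, which is why the selected context can stay small enough to show to the user.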

The project was abandoned in 1988, basically because it was being developed in a department dedicated to teaching conventional computer technology, and no specific cognitive trials were ever carried out. In retrospect research using a CODIL-like system for psychological trials varying the size and working of short term memory would have been useful. Some AI type research was done, and CODIL was used to support a significant heuristic problem solver, but full details were never published because the unconventional approach was not acceptable to the AI establishment of the 1970s and 80s. On the other hand the CODIL simulator was used to support teaching aids for classes of up to 125 students, and while overload was considered the use was more along the lines of conventional computer-aided instruction.

While I would love to get the CODIL system working and carry out some trials relating to cognitive load there is a problem: at 79 my cognitive load-carrying capacity is not what it was - but perhaps some younger people might like to take up the challenge.

Sunday, 27 August 2017

Evolutionary Steps towards Human Intelligence

Humans are vain creatures: we like to think we are very clever, and concentrate on what we can do. However to understand how our intelligence evolved we actually need to look at our in-built mental weaknesses.

The reason for this is that the Blind Watchmaker of evolution does not plan ahead and often the results seem to be far from optimum. For instance the nerves in the human eye are at the front of the retina, meaning that there is a blind spot in our vision. Our vagus nerve takes a roundabout route rather than taking the shortest path – and this becomes ridiculously long in an animal such as the giraffe.
Similar defects apply to our minds. The short term memory is surprisingly small while our long term memory is unreliable. We think more slowly when processing negative ideas, and suffer from confirmation bias. Unless taught, our logical skills have limitations; common sense is not always the most appropriate answer; and we are bad at handling numbers and even worse with more abstract mathematical concepts. We are also too keen to follow charismatic leaders without stopping to check whether their rhetoric makes sense.

The important thing to realize is that these limitations arise because we are using our “animal brains” in novel ways, and defects which were minor at the animal level have started to become significant.

The following draft notes suggest the main steps involved in the evolution of human intelligence starting with the simplest possible animal brain, and how what happened millions of years ago has put restraints on what our brains can do.

Monday, 7 August 2017

Trapped by the Mental Health System

Recent news relating to mental health provision has been bad. Because of shortcomings in social support provisions some mentally ill people have been trapped in mental hospitals for as much as three years, despite the fact that they could be discharged back into the community once suitable accommodation had been found. The fact that they are trapped in this way will not only demoralise the patients, but also the staff who want them to be able to live a more satisfactory life, and the bed-blocking means that places are not available for others that need them.

In addition staff are demoralised by low pay and other negative factors affecting the way that the NHS is being run, and by the way the mental health area often appears to be at the bottom of the pile. As a result recent news also revealed that there is a very large number of unfilled posts, putting more strain on the staff who continue to work in the mental health field. European workers in the NHS will have seen their pay fall compared with their home country as the effects of Brexit start to become apparent, and the "We hate foreigners" feeling underlying the policy of a so-called Christian Prime Minister will discourage others from coming to work in the NHS in future. Of course the Government has responded with a long term promise to train more mental health staff - which will not help the rapidly deteriorating current situation - and I suspect that like most such promises in the past this will involve some form of robbing Peter to pay Paul.

However what really made me feel ill was the Judge going public about a seriously mentally ill young lady, who was kept in a strip cell because she might use any furnishings to kill herself, and for whom no place for proper treatment could be found.

This links to my logo - as such cases are nothing new - and when I was drawing up my logo I had vivid memories of my daughter Lucy in a cell in the "Muppet House" in Holloway Prison - the "hell hole" where all the desperately mentally ill prisoners were held. In effect what happened to Lucy caused me to suffer post-traumatic stress disorder - and to abandon my University research.

Thursday, 27 July 2017

Are you snowed under by the data explosion?

This blog has been rather quiet recently, in part because I have been distracted by external health, social and political issues - and found the social media pretty depressing when considering the effects of Brexit and Trump. Another problem was that in trying to write up my research into the evolution of the brain I have been snowed under with new ideas.
Looking back over recent years I have come to the conclusion that I was most productive when I was doing a FutureLearn course (even if it was not directly relevant to my research) because discussing issues with other students helped me to think more creatively. As a result I am joining the course "The Data Explosion and its Impact on Fraud Detection" as it is clearly relevant to the way humans, and information about them, interact with modern computer systems. As the course is only scheduled for three weeks, and has ambitious aims, I suspect it will be rather superficial - but I am sure that I will find the interactions with other students stimulating.

So far I have introduced myself to the other students as:
I was working on how to interface humans with computers as early as 1967, and by the 1980s was interested in the potential privacy implications of exchanging information online, in the context of scientists attending an international conference whose organisers, and their computers, were in different countries with different data protection regulations. Now long retired I am interested in keeping in touch with developments that I could never have imagined when I started work. I am still very interested in the ways humanity has become "Trapped by the box" (my website is trapped-by-the-box.blogspot.com) and in modelling the evolution of human intelligence.
I will post comments on how I find the course later.

Wednesday, 28 June 2017

Trapped by inadequate building Regulations

I am sure that I am not the only person who has had nightmares after the horrific Grenfell Tower fire - and am reminded of an earlier disastrous fire in which burning plastic made in the town where I live resulted in 55 deaths. As our M.P., David Gauke, was only 2 years old when it happened I decided to write to him to remind him of the earlier event, caused by plastic made in his constituency:-

I am sure you, like everyone else, are horrified at the Grenfell flats fire and will have sympathy with everyone affected by it. The immediate issue is to stop jerry-building with inflammable materials as soon as possible. However it is very important that no stone remains unturned in identifying the weaknesses in the system of building regulations and ensuring that they are properly enforced.

I am writing to make you aware that there will be a number of people living in the North West of your constituency who have particular reasons to be horrified at the Government’s failure to bring in adequate building regulations relating to the use of inflammable materials in building construction, and in particular its failure to keep the regulations up to date as technology changes.

You are clearly too young to remember the Summerland fire at Douglas, IOM, in 1973 when 55 people died. Many deaths and injuries were caused because, to allow light into the entertainment centre, the roof was made of large plastic panels. The fire spread to the roof, which melted - showering the people below with drops of blazing plastic. These panels had been made by a company called William Cox, which was then based in Tring - their factory having since been replaced by Tring's Tesco store. Obviously Tring people who worked in the factory will have been particularly distressed when they learnt of the death and destruction their product had caused.