Monday, 1 June 2015

Article: Algorithms aren't Everything

ALGORITHMS AREN'T EVERYTHING
ITNow, Summer 2015, pp 60-61

Chris Reynolds FBCS follows up on Chris Yapp's Future Tech post 'The Limit of Algorithms' by looking at how the explosive growth of the computer industry may have led to unconventional research on how people process information being abandoned.

   
The algorithmic approach to computing is limited because it assumes that the rules relating to the information being processed are known in advance. Unfortunately, the real world doesn't always work in this way. While the computer industry has been remarkably successful, there are two major weaknesses relating to the human interface.
    One relates to handling open-ended real-world information - the classic difficult application is that of medical records, where a wide range of medical staff need access to information on people with one or more real (and sometimes imagined) diseases, and where diagnoses, treatments and medications are continually changing.
    The other is AI's failure to come up with satisfactory models of how the human brain processes information. This black hole in our knowledge makes it harder to interface people with computers.
    In practice it has proved far easier for flexible humans to learn to live with the limitations of computers rather than to design computers which are compatible with the open-ended way in which the human mind works.

Automatons

    For example, a typical modern supermarket can be seen as an intelligent, profit-motivated computer using its floor staff as slave-like automatons.
    Why has this situation arisen? Early computers were designed to carry out highly repetitive mathematical tasks because humans can't do these quickly and reliably.
    The computer industry advanced so fast, and was so successful, that there was no time to stop and question the underlying assumptions. Everyone 'knew' that what was needed was faster processors, bigger memories, better software and more versatile peripherals. There were big prizes to be won in a world of devil take the hindmost. 
    Many companies started to develop interesting ideas which were lost in the stampede to get products on the market and often the market leader, rather than the best thought out design, became the industry standard. 
    Once it became possible to use terminals to interact with computers, important pioneering work was carried out on the human interface at Xerox PARC. However, they concentrated on finding ways to hide the almost incomprehensible black box and never went back to first principles. 
    But why should they? Turing and von Neumann had produced a 'universal machine', so there was clearly no need to look for a fundamentally different information processing model.

Understanding people

    In fact at least one of the 'lost' research projects explored the possibility of building an information processor that understood people. Early in 1968 the LEO computer pioneers David Caminer and John Pinkerton funded research into a 'white box' computer with a human-friendly symbolic assembly language called CODIL (COntext Dependent Information Language). 
    The proposal was triggered by a design study on how Shell-Mex & BP's vast batch sales accounting programs could be put online for the first time. This suggested that, instead of pre-defining every possibility in advance, one should produce a friendly system that the management and sales staff could understand and directly control in real time. The original plan was to re-microprogram an IBM 360 CPU (for example) to handle structured lists of associatively addressed words rather than arrays of numerically addressed numbers. 
    The goal was to provide a direct human-CPU interface applicable to a range of potentially interactive tasks which were difficult to fully define in advance. 
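    A minimal sketch (in present-day Python, with invented item names such as CUSTOMER and PRODUCT and an invented matching function - not the original CODIL syntax or hardware design) may help to show the difference between numerically addressed and associatively addressed data:

# Purely illustrative sketch: the item names are invented, not the CODIL design.

# Conventional model: data addressed by numeric position; the meaning of
# accounts[1] lives entirely in the program that reads it.
accounts = [152.75, 98.10, 0.00]

# Associative model: each statement is a structured list of named items, and
# the store is queried by partial context rather than by numeric index.
statements = [
    {"CUSTOMER": "SMITH", "PRODUCT": "DERV",   "BALANCE": 152.75},
    {"CUSTOMER": "SMITH", "PRODUCT": "PETROL", "BALANCE": 98.10},
    {"CUSTOMER": "JONES", "PRODUCT": "DERV",   "BALANCE": 0.00},
]

def matching(store, **context):
    """Return every statement consistent with the partial context supplied."""
    return [s for s in store
            if all(s.get(name) == value for name, value in context.items())]

print(matching(statements, CUSTOMER="SMITH"))   # everything known about SMITH
print(matching(statements, PRODUCT="DERV"))     # everything involving DERV

    In a store of this kind the question asked, rather than a pre-planned program path, determines which items are brought together.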
    As the originator of the idea I was made project leader. Despite getting off to a good start, the work was aborted when ICL was formed, as it was incompatible with plans for the 2900 series of computers. ICL allowed the work to continue on the understanding that it was not promoted in a way which criticised ICL's decision to stop supporting it. Unfunded work on the language (using a simulation on conventional hardware) continued for some years. 
    The work ended (due to failure to attract funding) just as the trial marketing of a school software package based on the idea was attracting many favourable reviews. A paper 'CODIL: The Architecture of an Information Language' (Computer Journal Vol 33, pp 155-163, 1990) describes the project, including references to many earlier publications. 
    Recently I decided to re-examine the project files in the light of present-day research. While there were times when things might have turned out differently, the real obstacle was the way the project was caught up in the 'must be commercially profitable' race. I was told to 'build a software prototype' and to talk to no one until the patent came out. In retrospect, what was needed was an in-depth multi-disciplinary study into the underlying science. 
    Even when the patent gag was removed work continued on building and testing simulators rather than thoroughly probing the underlying theories.

A priori knowledge of the rules

    This lack of theoretical understanding made it difficult to get potentially controversial papers published and to get research funding. The approach underlying CODIL seems counterintuitive because modern society prizes knowledge (for good reason) and despises ignorance. 
    A successful computer program is one which has perfect a priori knowledge of the rules relating to a well-defined task. Incomplete or inaccurate understanding of the rules is 'bad' because it leads to the program's failure, sometimes in disastrous ways. 
    CODIL assumes that ignorance of what individual human users want to do is a 'good' starting point for a genuinely open-ended interface. It is a pattern recognition approach which grows a knowledge base from the bottom up and makes no formal distinction between 'program' and 'data'. 
    However, when high level patterns start to emerge, the model can morph into something resembling the algorithmic model - with patterns segregated into groups representing 'program' and 'data'. It thus combines the flexibility of handling the unexpected with the formality of conventional computer programming when appropriate. 
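    As a rough illustration of what 'no formal distinction between program and data' can mean in practice (again a minimal Python sketch of my own, with invented entries, rather than CODIL itself), individual cases and the more general patterns that emerge from them can be held in exactly the same form, a pattern behaving like a rule simply because some of its items are left unspecified:

# Illustrative sketch only: cases and generalised patterns share one format.
knowledge = [
    # Individual cases, accumulated bottom-up as they are encountered.
    {"ANIMAL": "ROBIN",   "HAS": "FEATHERS", "CAN": "FLY"},
    {"ANIMAL": "SPARROW", "HAS": "FEATHERS", "CAN": "FLY"},
    # An emergent pattern stored in the same form; the unspecified slot (None)
    # is what makes it act as a 'rule' rather than a remembered case.
    {"ANIMAL": None, "HAS": "FEATHERS", "CAN": "FLY"},
]

def recall(store, **known):
    """Return the first entry whose items are consistent with what is known."""
    for entry in store:
        if all(entry.get(k) in (v, None) for k, v in known.items()):
            return entry
    return None

# The generalised pattern answers a query about a case never stored explicitly.
print(recall(knowledge, ANIMAL="WREN", HAS="FEATHERS"))

    In this toy form the same query mechanism serves both the case-by-case entries and the rule-like ones, which is the sense in which a pattern store can gradually take on the shape of a conventional program/data split as general patterns accumulate.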
    In the 1970s I thought that this approach modelled some aspects of human thought, and made unsuccessful attempts to relate the research to contemporary artificial intelligence research. I used CODIL to implement a powerful heuristic problem solver, but papers describing problems actually solved were repeatedly rejected as 'too theoretical' to ever work. I looked at providing CODIL with a natural language interface - but couldn't reconcile it with Chomsky's model. I never looked at the possibility of implementing CODIL on a neural net after reading Minsky's views. 
    The hostility I encountered from the AI community left me very depressed and reluctant to publish or apply for grants. While later work, using CODIL as an educational tool, looked promising, various non-technical factors led to my abandoning the project. Now the formerly highly praised 1970s AI research is considered an irrelevant backwater, few linguists still agree with Chomsky, and many consider that Minsky's views delayed research in neural networks by many years. 
    A reassessment of CODIL now seems appropriate. I find that the basic idea can be mapped onto a mathematically unsophisticated neural net and there is a possible evolutionary pathway linking activity at the neuron level to human intelligence. 
    An animal with such a brain would find it a useful survival tool, but learning times limit what it can do in a lifetime. Once language allows one generation to communicate rules to the next, the 'lifetime' barrier is breached. The fact that the CODIL-like pattern recognition model can morph into a rule-based model provides a framework on which a simple language can boot itself up into something really powerful. 
    The result is a major tipping point in 'intelligence' which does not require any increase in brain size or genetic mutation. This is because learning general rules by instruction uses less memory than learning large numbers of individual cases by trial and error. 
    Virtually overnight there would be a switch to a potentially very much faster cultural evolution. This fits with archaeological evidence suggesting that humans suddenly became better toolmakers about 100,000-150,000 years ago, with no obvious increase in brain size. Other aspects of the CODIL model mesh with the ideas of concept cells and mirror neurons at the neuroscience end of the scale, and with confirmation bias and unreliable long-term memory at the human psychology end. 
    The approach suggests that the main difference between human and animal brains, apart from size-related factors, is that language means that humans are far better at educating their young. 
    Of course, this brain model is currently no more than a thought experiment, and if there is any merit in the approach I leave it to a younger generation to follow up, although I may be able to give some advice. 
    Whether I am right or wrong, the real lesson to be learnt is that, in the rush to exploit the computer, some interesting unconventional scientific ideas failed to get the degree of attention they deserved. 
    I am sure that others started thinking of alternative human-oriented information models and, like me, were actively discouraged from going further because 'computer technology is so successful that the underlying science must be right'.

Chris Yapp's original post is at www.bcs.org/conBlogPost/2403
