Tuesday 6 March 2012

Bridging the Gap between Animal Brains and “Intelligent” Human Brains.


In drafting The Evolution of Intelligence – From Neural Nets to Language I was perhaps trying to get a quart into a pint pot, as I had a lot to say which could not reasonably all fit into one post. First reactions show that I failed to state clearly how the whole fits into an evolutionary model. So here goes ...

Basic Evolutionary Principles
  • All land vertebrates have basically the same body architecture.
  • Some mammals have made extreme modifications to the basic architecture simply by stretching some bits and/or shrinking others.
  • To understand the changes in a particular case we need to understand:
  1. the original organ that has been stretched
  2. the way it has been stretched
  3. the evolutionary pressures on it to expand or not
  4. any special adaptations or trigger points necessary to explain the extreme stretching.
The Key Question
  • In the human species something has expanded in the brain apart from its physical size. What is it, and what factors would have been relevant in its expansion?
My Suggested Answer in General Terms
  1. All land vertebrate brains need to have a simple “trial and error” mechanism for classifying relevant features of their environment, remembering the past, and using these memories to suggest appropriate actions. This should include an accelerated ability to learn about dangers.
  2. This basic mechanism, with a few comparatively small modifications, is sufficient to explain what the human brain does, given the right resources in terms of physical brain size and time to learn. In behavioural terms most of what we actually see is cultural and need not represent any major change in the brain's internal way of working.
  3. While it might seem obvious that being more intelligent is an evolutionary advantage, in practice this is not necessarily the case. A large brain is an expensive organ to run, the “learning” process is slow, and everything learnt is lost when the brain dies. Species which use their limited resources on other ways of staying alive are more likely to be successful.
  4. The more information that can be transferred from one generation to the next, the more economical it becomes to expand the brain's information holding capacity. There are a number of potential trigger points where improving such communication significantly increases the advantage of being able to store and process more information. One such trigger point is undoubtedly the ability to name objects and actions by associating symbolic noises and/or gestures with them. Another could be the modification of the “learn about dangers quickly” mechanism that must be present in all animal brains to apply to information transferred from older members of the species.
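The “trial and error” mechanism of point 1 can be sketched as a toy program. To be clear, everything here - the class name, the features, the learning rates - is a purely illustrative assumption of mine, not anything drawn from the original research; the only point it demonstrates is how a simple associative memory could give dangers an accelerated learning rate.

```python
import random

class TrialAndErrorLearner:
    """Toy sketch of a 'trial and error' learner.

    Features of the environment map to remembered action scores;
    successful actions are reinforced slowly, while dangers are
    learned at an accelerated rate. All names and numbers are
    illustrative assumptions.
    """

    def __init__(self, actions, reward_rate=0.1, danger_rate=0.5):
        self.actions = actions
        self.reward_rate = reward_rate    # slow learning from success
        self.danger_rate = danger_rate    # accelerated learning about dangers
        self.memory = {}                  # (feature, action) -> score

    def choose(self, feature):
        """Suggest the action with the best remembered score (ties broken at random)."""
        scores = [self.memory.get((feature, a), 0.0) for a in self.actions]
        best = max(scores)
        return random.choice([a for a, s in zip(self.actions, scores) if s == best])

    def learn(self, feature, action, outcome):
        """Update memory; negative (dangerous) outcomes are weighted more heavily."""
        rate = self.danger_rate if outcome < 0 else self.reward_rate
        key = (feature, action)
        self.memory[key] = self.memory.get(key, 0.0) + rate * outcome


learner = TrialAndErrorLearner(["approach", "flee"])
learner.learn("snake", "approach", -1.0)   # one bad experience...
learner.learn("berry", "approach", +1.0)   # ...versus one good one
# After a single trial each, the danger is remembered five times
# more strongly than the reward, so the learner now flees snakes.
```

Point 4's trigger point would then amount to allowing `learn` to be driven by a warning from an older member of the species rather than by direct (and possibly fatal) experience.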
My Specific Contribution to Answering the Key Question
There has been much debate in the literature as to the validity of such models. My specific contribution to the debate is to show that a highly unconventional artificial language called CODIL can demonstrate significant information processing capabilities AND appears to be compatible with the very simple information processing model which all land vertebrates must have to some degree. I do not claim that CODIL provides a perfect model of how the brain processes information, or of the steps needed to get to where we are today. However it demonstrates possible mechanisms by which a basic animal brain could be stretched to support something approaching natural language and human intelligence.

If your reaction is to reject the CODIL approach because it was a byproduct of computer modelling I suggest you look at How many trucks will your helicopter pull?

Friday 2 March 2012

How many trucks will your helicopter pull?

From clipart by clipartof.com and clker.com
Of course you wouldn't ask such a question - would you?

Are you sure? Imagine an inventor 150 years ago who was hoping for funds to build a helicopter. The idea of being able to move from anywhere to anywhere sounds exciting, so any potential financier would need the idea to be approved by his transport experts, whom you will find at the railway station. They would ask questions such as "We have 10,000 soldiers who want to travel from London Euston to Manchester Piccadilly - how much would it cost to send them by helicopter?" When the inventor replies that he would send them by train, the conclusion will be that he has no confidence in his own invention. When asked "How do you ensure that your helicopter stops when the signals on the line are red?" the inventor's riposte that this is not a sensible question is taken as evidence that he is totally ignorant of how the great British transport system works. Needless to say, the inventor would not get his invention funded.

Thursday 1 March 2012

Neural Nets or Networks of Neurons?


A comment on The Evolution of Intelligence – From Neural Nets to Language says:
I believe it is time we consider Post Neural Networks models, to overcome limitations of the NN model in such applications as natural language
I suspect that this was meant as a criticism - but I agree (in the same general terms as the original comment) with the sentiment.

Part of the problem relates to terminology, the fact that I have been out of academic circles for over 20 years, and that at 73, with no easy library access, I am not in a position to absorb all the research in all the areas relating to the brain, its evolution, language, etc. carried out since I retired. As a result I will occasionally use words in their common-sense "everyday" meaning when they also have a specialist meaning to experts in one field or another.

As far as terminology goes I have used the term "neural net" to mean the network of neurons in the brain - and not some specific mathematical model from artificial intelligence research. There can be little doubt that the brain contains a network of interconnected nerve cells (neurons) - they clearly play an important role in how the brain works - and I am sure we couldn't speak if all the neurons were suddenly removed. Thus any model of language MUST, in some way or another, depend on understanding how the neurons in the brain work.

One must also consider the problem of models in general and what they can and cannot do. As someone who graduated as a chemist and did a Ph.D. relating to theoretical chemistry, I am well aware that for some tasks you need multiple models. When I cook a meal I do not, for example, try to explain what is happening in terms of wave functions.

If we want to relate what the neurons are doing to natural language we need suitable models. To ask whether a typical artificial intelligence neural net model could directly support natural language is, in modelling terms, like asking whether quantum mechanics could predict which sperm will fertilize an egg, or whether a stored program computer's central processor could run a major air traffic control system without intervening software.

What I am saying is that if you are to relate what happens at the neuron level to the works of Shakespeare you will need intermediate models. Modern stored program computers work because they interpose an onion shell of models (called programs) between the electronics and what the user actually sees and does at the keyboard.

So the question we need to ask is not "Can we go directly from a neural net to natural language?" but rather "What intermediate models do we need to introduce to bridge the gap?" CODIL was designed as a "white box" information processing tool to help people with a range of potentially open-ended tasks - and while it was not taken up commercially it was demonstrated to be potentially very powerful. Although funding was not obtained to allow the research to continue, it could well prove able to support a natural language system. What I realised when I decided to pop my head up from retirement was that its architecture was compatible with at least some ideas about neural nets.

While I would be the first to agree that it may well not be THE ANSWER, it at least looks as if it could provide the basis of a "first attempt" at modelling the brain's equivalent of a stored program computer's symbolic assembly language.

If this is what Post-Neural Research is looking for we are in 100% agreement.

I had a dream last night - how bizarre



Dreams may often be bizarre – but what was bizarre about this one is that I almost never have any dreams. I was walking a dog - I think it was my daughter's dog Kayleigh - somewhere (like most dreams the memories are fragmentary), and then I was taking some postcards out of a box to show someone, and when I came to put them back the box had vanished. A wild search followed: should I call the police, should I tell the university authorities, and where was Kayleigh?

London Bus with Picture Post Advert
However the subject of the dream really didn't matter. As a child I had vivid nightmares of a huge monster which prowled the streets and could eat you up. Anyone who lived in London during the war would have taken the “monster” for granted, but as a 5 year old who had been brought up in rural Somerset I had never seen a double decker bus before – much less one with huge eyes. Perhaps this experience, at a formative stage, “convinced” my brain that dreaming (at least with visual images) was something to be avoided.

So why did I dream last night? It could have been that I took more exercise than usual: a 2 mile walk to the swimming pool (40 lengths), followed immediately by half a mile to the supermarket and half an hour pushing a trolley. Well, I was tired, but I don't think that was the main cause.