Wednesday, 9 July 2014

Filling the Black Hole in Brain Research

Three years ago I posted Brain Storms - 2 - The Black Hole In Brain Research, suggesting why there were problems in bridging the gap between the brain's neural network and human intelligence. Earlier this year both the E.U. and the U.S.A. announced multi-billion pound projects to try to bulldoze a solution. P.Z. Myers has now posted What are you going to simulate? on Pharyngula, and I have posted the following comment, which may be lost among nearly 100 other comments on his site.
:::::::::::
The research seems to be working on the assumption that if we knew all the connections we would automatically understand how the brain works and what makes us different to animals. I would suggest that the best way to understand how the brain works may well be to start at the animal end and consider the possible evolutionary pathways. For this reason I agree with P.Z. when he says:

“What the hell? We aren’t even close to building such a thing for a fruit fly brain, and you want to do that for an even more massive and poorly mapped structure? Madness! It turns out that I’m not the only one thinking this way: European scientists are exasperated with the project.”

I am working on an evolutionary model of how a network of neurons could meaningfully communicate by asking “What is the simplest decision-making model of a brain that could be sufficient to help an animal survive?” The model has three basic operations: RECOGNISE a known pattern, COMPLETE an incomplete pattern, and REMEMBER a new pattern. In addition there is an OPTIMIZE function which increases the importance of useful patterns and forgets redundant ones, but it is not directly involved in the decision-making process. At this level the model predicts the existence of concept cells and mirror neurons.
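To make these operations concrete, the sketch below shows one way the decision-making cycle could be expressed in Python. It is only an illustrative toy, not code from the research: the PatternMemory class, the weights and the feature sets are all invented for the example, and patterns are treated simply as sets of features matched against a weighted store.

    from dataclasses import dataclass, field

    @dataclass
    class PatternMemory:
        """A toy store of weighted feature-set patterns (illustrative only)."""
        patterns: dict = field(default_factory=dict)   # frozenset of features -> weight

        def recognise(self, observed):
            """RECOGNISE: is this exact pattern already known?"""
            return frozenset(observed) in self.patterns

        def complete(self, partial):
            """COMPLETE: return the best-weighted known pattern that contains
            the partial observation, or None if nothing fits."""
            partial = frozenset(partial)
            candidates = [p for p in self.patterns if partial <= p]
            if not candidates:
                return None
            return max(candidates, key=lambda p: self.patterns[p])

        def remember(self, observed):
            """REMEMBER: store a new pattern with a small initial weight."""
            self.patterns.setdefault(frozenset(observed), 1.0)

        def optimize(self, used=(), decay=0.9, threshold=0.1):
            """OPTIMIZE: reinforce patterns that proved useful, decay the rest,
            and forget any pattern whose weight falls too low."""
            useful = {frozenset(p) for p in used}
            for p in list(self.patterns):
                if p in useful:
                    self.patterns[p] += 1.0
                else:
                    self.patterns[p] *= decay
                if self.patterns[p] < threshold:
                    del self.patterns[p]

    # An animal remembers that rustling plus a musky smell meant a predator,
    # and later completes the pattern from the rustling alone.
    memory = PatternMemory()
    memory.remember({"rustling", "musky smell", "predator"})
    print(memory.recognise({"rustling"}))   # False - only part of a known pattern
    print(memory.complete({"rustling"}))    # the full remembered pattern

The point of the sketch is simply that RECOGNISE, COMPLETE and REMEMBER need nothing more sophisticated than set matching against a weighted store, with OPTIMIZE pruning that store in the background.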

Of course this model is really crude in terms of formal mathematical logic and does not even support the idea of an explicit NOT. Many years ago the AI guru Minsky pointed out why, in mathematical terms, such models were not logically powerful enough, but any ten-year-old who has been taught about Venn diagrams could point out the flaws. So it would appear obvious that research in this direction would be a waste of time. After all, we know we are ever so intelligent, and therefore there must be some kind of philosopher’s stone of intelligence, possibly in the form of some special genetic mutation, that makes our brain different to a primitive animal brain.

As far as I can determine, everyone has assumed that our marvellous human brain could not have such an appallingly crude driving mechanism at its heart – and in making this assumption we have forgotten how good evolution can be at getting the best out of unpromising material. As a result I am looking at how such a crude system might evolve into a human brain.

An important resource in this process is an archive of mainly unpublished research findings from an attempt to design a “white box” information processing system to help the human user handle incompletely defined information processing tasks. The planned white box system handled recursively defined, associatively addressed set names in a bottom-up manner, while the conventional “black box” computer is a top-down, rule-based system that processes numbers in a numerically addressed linear store. The research showed that the approach could handle a wide range of non-numerical tasks, including A.I.-style problem solving, but for non-technical reasons the work was abandoned over 25 years ago.
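For readers who have never met the idea, the following toy Python fragment suggests what “recursively defined, associatively addressed set names” might look like in contrast to a numerically addressed store. The set names, the data and the matching rules are invented for this illustration and are not taken from the original white box design: entries are fetched by the names and values they contain, and a value can itself be another entry.

    # A toy associative store: each entry is addressed by the set names it
    # contains rather than by a numeric location, and a value may itself be
    # a nested entry, giving the recursive structure mentioned above.
    store = [
        {"SPECIES": "blackbird", "DIET": {"FOOD": "worms", "WHEN": "dawn"}},
        {"SPECIES": "owl",       "DIET": {"FOOD": "mice",  "WHEN": "night"}},
    ]

    def matches(entry, query):
        """True if every set name/value pair in the query is satisfied,
        descending recursively into nested entries."""
        for name, wanted in query.items():
            if name not in entry:
                return False
            found = entry[name]
            if isinstance(wanted, dict) and isinstance(found, dict):
                if not matches(found, wanted):
                    return False
            elif found != wanted:
                return False
        return True

    def recall(query):
        """Associative addressing: fetch entries by content, not by position."""
        return [entry for entry in store if matches(entry, query)]

    print(recall({"DIET": {"WHEN": "night"}}))   # -> the owl entry

The contrast with a conventional black box machine is that nothing here depends on where an entry sits in memory; the only “address” is the partial description supplied in the query.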

The original white box proposals included a number of conventional computing features, such as the ability to do arithmetic and to drive a computer terminal, but if these frills are stripped off, the inner workings can be mapped onto the crude brain model. A comparison demonstrates that such a simple approach could handle large quantities of poorly structured information, could support at least a simple language, and could even morph into something like a programming language if required to handle complex sequential tasks.

Once it is realised that the primitive brain model can do significant useful work, a consideration of evolutionary pressures suggests that there is a barrier to animals becoming more intelligent, set by the amount of useful information that can be learnt in a lifetime. However, once a primitive language becomes an efficient way of transferring reliable cultural information between generations, the barrier falls away and cultural evolution takes off like a rocket. As language is a tool that can itself be learnt and improved, it rapidly becomes more powerful, allowing information to be transferred even faster. A minor genetic change in the learning mechanism would speed the process even more – but it would also make people more inclined to follow charismatic leaders without question (i.e. religion?). In addition, it seems very likely that information learnt in abstract terms via language would use the neural network more efficiently, increasing the brain’s knowledge capacity with no increase in physical size. In modern humans the cultural information serves to hide some of the limitations of the crude internal workings – but some important human brain failings, such as confirmation bias and unreliable long-term memory, are predicted by the model.

Basically the model predicts that there is no major difference between the way our brain works and the way an animal’s brain works. The difference is that, by using language, we have greatly increased our rate of learning, and that our intelligence is virtually entirely due to cultural knowledge.
