Monday 8 April 2013

Looking for the "Neural Code"


Going through an older section of my email inbox I found a Scientific American link to John Horgan's blog post "Do Big New Brain Projects Make Sense When We Don't Even Know the “Neural Code”?", in which he wrote:
Neuroscientists have faith that the brain operates according to a “neural code,” rules or algorithms that transform physiological neural processes into perceptions, memories, emotions, decisions and other components of cognition. So far, however, the neural code remains elusive, to put it mildly.
The neural code is often likened to the machine code that underpins the operating system of a digital computer. According to this analogy, neurons serve as switches, or transistors, absorbing and emitting electrochemical pulses, called action potentials or “spikes,” which resemble the basic units of information in digital computers.
I prepared the following, perhaps too lengthy, comment, but when I came to post it I got a message that the page had been moved, and all attempts to find it resulted in irrelevant pages on the Scientific American web site. So I am posting my response below:
------------------------------
I would argue that the problem with modern brain research is an inability to see the wood for the trees. Of course, if you look at the brain in detail, things get very complex. But such complexity is common in science. The key idea underlying evolution is very simple, yet when you look at individual cases in detail there can be enormous complexity. The same was true in medieval times, when the movements of the known heavenly bodies appeared very complex, until it was realised that things became much simpler if you calculated the motions of the planets using the sun, rather than the earth, as the key reference point.

The problem with the human brain is that, as John Horgan says, we don't know the “Neural Code”, and virtually everyone is looking at the problem in ever greater detail, apparently on the assumption that the harder you look at the fine detail the more certain you are to discover the shape of the wood!

I have been trying to stand back and get an overview, and have come up with an “ideal brain” model which in some ways parallels the “ideal gas” model in physics. All neurons (like all gas molecules) are identical, and the dynamic links between neurons are like the dynamic collisions between gas particles. Using such a simple model it is possible to “grow” a brain that can remember and use more and more complex concepts, with the complexity of the most advanced concepts it can handle depending on the brain's capacity and the time available for learning. The model explains consciousness and can predict detailed observations about the brain: for instance, the so-called “mirror neurons” turn out to be nothing special, as the observations simply reflect the way that all neurons work in the “ideal brain”. In addition it is possible to ask how human “intelligence” might have evolved, and this approach predicts a major tipping point (rather than some major genetic “improvement”) which produces an “explosion” of “new ideas” when “cultural intelligence” becomes a more effective tool than the innate biological “intelligence” of the “ideal brain” model.
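For readers who think best in code, the toy Python sketch below is purely my own illustration of the kind of mechanism I have in mind, not the model itself: the class, the names and the numbers are all invented for the example. It simply shows how identical units with dynamic links can “grow” concepts on top of earlier concepts, with the depth of what can be learnt limited by capacity and learning time.

# Purely illustrative toy, not the "ideal brain" model itself: every node is
# identical, and a "concept" is just a new node linked to whichever nodes
# happened to be active together when it was formed.

class IdealBrainToy:
    def __init__(self, capacity):
        self.capacity = capacity   # how many nodes this brain can ever hold
        self.links = {}            # node name -> set of nodes it was built from

    def sense(self, name):
        """Register a raw sensory node (identical to every other node)."""
        if name not in self.links and len(self.links) < self.capacity:
            self.links[name] = set()
        return name

    def learn(self, active_nodes):
        """Grow one new node from whatever nodes are active together.
        Nothing here is specific to any task or kind of pattern."""
        if len(self.links) >= self.capacity:
            return None            # out of capacity: no more concepts can form
        new_node = "concept-" + str(len(self.links))
        self.links[new_node] = set(active_nodes)
        return new_node

    def recall(self, cue):
        """Return the stored node whose links overlap the cue most strongly."""
        best, best_overlap = None, 0
        for node, parts in self.links.items():
            overlap = len(parts & set(cue))
            if overlap > best_overlap:
                best, best_overlap = node, overlap
        return best

brain = IdealBrainToy(capacity=100)
low = [brain.sense(s) for s in ("fur", "purr", "whiskers")]
cat = brain.learn(low)                           # a first-level concept
pet = brain.learn([cat, brain.sense("owner")])   # a concept built on a concept
print(cat, pet, brain.recall(["purr", "whiskers"]))

The only “rule” in the sketch is that a new node is linked to whatever was active when it was formed, which is the sense in which the units are as interchangeable as gas molecules.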

The problem with the model, and possibly the reason why it appears not to have been explored before, is that to a “culturally matured” mind (and everyone accessing this text on the internet will be culturally mature) the model involves several counter-intuitive steps.
  1. The model assumes that at the genetic level the only significant difference in processing mechanisms between our brains and those of most animals relates to supercharging effects (more capacity, more links, a more effective blood supply, etc.), and that if there is a difference the model actually suggests a reason why we might be genetically less intelligent than some other animals! Before you shout me down over this “outrageous claim” I should point out that the model suggests why culturally supported intelligence is infinitely more effective than the genetic intelligence foundation on its own.
  2. You have to forget everything you have learnt about computers and algorithms. The definition of a stored-program computer requires there to be a pre-defined model of the task to be performed. The “ideal brain” model starts by knowing nothing about anything and has no idea what kinds of tasks it will be required to carry out. Virtually all it does is store and compare patterns, without having any idea what those patterns represent (see the short sketch after this list). Once you start looking in great detail at how specific named tasks are processed you have taken your eye off the ball, as you are asking about what the brain can learn to do, and not what the underlying task-independent mechanism is.
  3. Everyone knows there can't be a simple model of the “Neural Code”, because with so many people looking someone would have found it if it existed, so there is no point in looking ...
  4. My research has “reject” stamped in all the standard “Winner of the Science Rat Race” boxes. I make no secret that I am 75, am not currently associated with any established research group, and the only facilities I have are a PC in a back bedroom, access to the internet, and access to some old research notes on a long-abandoned blue-sky project which was trying to design a human-friendly white-box computer to replace the standard human-hostile black-box computer everyone takes for granted.
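To make point 2 above concrete, here is another deliberately minimal Python sketch, again my own invention rather than the model itself: the mechanism simply stores patterns and compares new input against them by overlap, and at no point does it contain a pre-defined model of any task or any idea of what the patterns “mean”.

# A minimal sketch of point 2 (invented names, not the model itself): the
# mechanism stores patterns and compares new input to them by simple overlap.
# It never interprets the patterns and has no built-in task.

def store(memory, pattern):
    """Store a pattern; the mechanism never asks what it represents."""
    memory.append(frozenset(pattern))

def best_match(memory, pattern):
    """Return the stored pattern that overlaps the new input most."""
    return max(memory, key=lambda stored: len(stored & frozenset(pattern)), default=None)

memory = []
store(memory, {"red", "round", "stalk"})    # could be an apple ...
store(memory, {"red", "round", "rubber"})   # ... or a ball; the mechanism neither knows nor cares
print(best_match(memory, {"round", "rubber", "bounces"}))

A conventional stored-program approach would instead start from an algorithm written for a known task, such as “recognise apples”, which is exactly what the “ideal brain” model does not assume.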
If you are interested, I hope to have a detailed description of the “ideal brain” model on my blog later this month.
