Monday, 29 April 2013

A Simple Guide to the Relationship between Neurons, Natural Language and CODIL

I have posted the detailed discussion paper From the Neuron to Human Intelligence: Part 1: The “Ideal Brain” Model and my plan is to supplement it with brief notes examining various topics, including any raised by comments. This is the first of those notes.

A noun such as Macbeth, or Dagger, or Author is represented in the brain as a somewhat amorphous network of neurons which I have called a memode.

Memodes contain other lower level memodes. Thus Murderer will contain Macbeth and Crippen, while Author will contain Shelley and Shakespeare. People will contain sets such as Murderer and Author and individuals such as Churchill.

A memode may also represent a context where several nouns are associated. An example of a context would be Macbeth; Duncan; Dagger. Another might be Macbeth; Shakespeare.

The ideal brain model connects up the links – so the above two examples can be merged as Macbeth; Duncan; Dagger; Shakespeare.

As Macbeth is a Murderer we can expand the above to the context Murderer Macbeth; Victim Duncan; Weapon Dagger; Author Shakespeare. Although we are only using nouns, it is easy to relate this to a natural language statement such as “According to Shakespeare Macbeth used a Dagger to kill Duncan.”
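For readers who think in code, the containment and context-merging ideas above can be sketched in a few lines of Python. This is purely illustrative – the class and method names are my own, and this is neither CODIL nor a claim about how neurons actually store anything:

```python
# A toy sketch of "memodes" as named nodes containing lower-level
# memodes, using the post's own examples. Illustrative only.

class Memode:
    """A named node that may contain lower-level memodes."""
    def __init__(self, name):
        self.name = name
        self.members = set()   # lower-level memodes

    def contains(self, other):
        """True if `other` is reachable anywhere below this memode."""
        return other in self.members or any(
            m.contains(other) for m in self.members)

macbeth, crippen = Memode("Macbeth"), Memode("Crippen")
murderer = Memode("Murderer")
murderer.members |= {macbeth, crippen}
people = Memode("People")
people.members.add(murderer)

assert murderer.contains(macbeth)
assert people.contains(macbeth)      # containment is transitive

# Contexts are just sets of memodes; two contexts sharing a member
# can be merged, as in the two Macbeth examples above.
ctx1 = {"Macbeth", "Duncan", "Dagger"}
ctx2 = {"Macbeth", "Shakespeare"}
merged = ctx1 | ctx2 if ctx1 & ctx2 else None
assert merged == {"Macbeth", "Duncan", "Dagger", "Shakespeare"}
```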

CODIL was a blue sky project to try and provide a fundamentally human friendly information processor for handling a range of non-mathematical tasks. In MicroCODIL (a demonstration version that runs on the BBC Microcomputer and uses colour) the above example would be represented as

1 MURDER = Macbeth,
2   VICTIM = Duncan,
3     WEAPON = Dagger,
4       AUTHOR = Shakespeare.

While the ideal brain model works by making links within a network of neurons, and CODIL works by moving symbols around a digital store, the two processes are equivalent.
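As a rough illustration of this equivalence, the same facts can be held both as links in a graph (the “ideal brain” view) and as an ordered list of attribute/value symbols (the CODIL-style view). The Python below is my own sketch, not real CODIL:

```python
# The same facts two ways: as links between nodes, and as the
# MicroCODIL-style item list shown above. Both answer the same
# query; the query functions themselves are invented for this sketch.

# Link view: a dict from a node to the nodes it is linked to.
links = {"Macbeth": {"Duncan", "Dagger", "Shakespeare"}}

# Symbol view: an ordered list of (attribute, value) items.
items = [("MURDER", "Macbeth"), ("VICTIM", "Duncan"),
         ("WEAPON", "Dagger"), ("AUTHOR", "Shakespeare")]

def linked(a, b):
    """Is b directly linked to a in the network view?"""
    return b in links.get(a, set())

def in_same_statement(a, b):
    """Do a and b appear as values in the same CODIL-style statement?"""
    values = {v for _, v in items}
    return a in values and b in values

# Both representations answer "is Macbeth associated with Dagger?"
assert linked("Macbeth", "Dagger")
assert in_same_statement("Macbeth", "Dagger")
```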

The CODIL idea was triggered by research on a very large commercial data processing system, and has been trialled on medium-sized, poorly structured databases (medical and historical data), in providing online tutorial material for classes of more than 100 students, as a schools package for demonstrating a wide range of information processing ideas, and in the area of artificial intelligence. A package called TANTALIZE used CODIL to solve 15 consecutive Tantalizers (now called Enigma) published weekly in the New Scientist.

The parallel between the ideal brain model and CODIL suggests that the ideal brain model could probably support a reasonable level of natural language skills – but more research is required. The bottleneck as far as the basic ideal brain model is concerned relates to the speed of learning – and this issue will be addressed in Part 2: Evolution and Language.

Evolution and Alfred Russel Wallace

Alfred Russel Wallace
Last night I watched part 2 of Bill Bailey's Jungle Hero - a two part BBC TV series on the life of Alfred Russel Wallace - and immediately it finished I switched to the BBC iPlayer to watch part 1. If you are interested in the origins of the ideas behind evolution, and the difficulties of persuading the establishment to change its way of thinking, I am sure you will find the programmes really interesting - as I did.

Friday, 26 April 2013

From the Neuron to Human Intelligence: Developing an “Ideal Brain” Model

I have just posted a discussion paper: From the Neuron to Human Intelligence: Part 1: The “Ideal Brain” Model which will shortly be followed by Part 2: Evolution and Learning. In these papers I propose a model which suggests how the electrical activities of neurons in the brain may be related, via evolutionarily probable pathways, to high level activities such as language and intelligence. If the model is even reasonably accurate it could have implications in many different specialist areas where an understanding of how the brain works is relevant.

It is clear that more research is needed to establish the validity of the model and my problem is how to go about both publishing and organising any further research, especially as some of the ideas are counter-intuitive – which can make communicating them difficult. If I were a young academic just starting out on a research career and working in a supportive university there would be some relatively obvious options. However I am 75 years old, my only resource is a personal computer with access to the internet, and I currently have no active contacts with any major academic institution. As a scientist through and through I feel the idea should be followed up, and as an old age pensioner I would be happy to hand the matter over to a younger generation and enjoy retirement.

Bearing in mind my limitations the approach I have taken is to use this blog as the means of stimulating discussion of the issues and disseminating information about the research.
  1. The two papers have been kept comparatively short to make them more readable. If I tried to address every possible research issue that might be relevant it would take me far too long and the texts would become unreadable.
  2. Anyone who wants to see more examples of how CODIL works, its applications, etc. can look at the many CODIL papers already online. In addition I have other reports (some only in draft form) and actual computer listings of other applications – and these can be posted online if appropriate.
  3. If anyone has difficulty in understanding any points, and/or has specific questions – I will be happy to answer them via this blog. In particular if you are doing some brain related research (in the widest sense) send me details (remembering I may have problems with pay walls) and I will happily give you my suggestions. After all a good test of my ideas is whether I can answer your questions convincingly.
  4. If there is enough interest I will try to make arrangements to make MicroCODIL software and manuals available to anyone who has access to a BBC Microcomputer. (Because the computer has become something of a cult survivor, second-hand ones are often available.)
  5. Should I be able to help an existing university research project by giving a talk, attending a seminar, etc., I am happy to do so. Even if you don't agree with me exposure to controversial ideas can help everyone to start thinking outside the box.
  6. If a particular research group wanted to resurrect any of the CODIL programs and applications, or use them as a basis for an “ideal brain” simulation I would be happy to advise.

Thursday, 25 April 2013

Rigid Legal Rules can Kill

Some of you may not realise the symbolism behind this site's logo. Occasionally a news item can bring back sad memories.  This evening Channel 4 News reported that:
Two mothers, both of whose 17-year-old sons committed suicide after being detained by the police, wept in court today as a judge ruled the treatment of 17-year-olds in police custody is unlawful.

Sunday, 21 April 2013

Coelacanths and Confirmation Bias

I recently read an interesting blog post by P Z Myers entitled Coelacanths are unexceptional products of evolution - which discussed why it was inappropriate to call them "living fossils" which were "slowly evolving". It included the following example showing confirmation bias in the scientists researching this interesting fish:

So why is this claim persisting in the literature? The authors of the BioEssays article made an interesting, and troubling analysis: it depends on the authors’ theoretical priors. They examined 12 relevant papers on coelacanth genes published since 2010, and discovered a correlation: if the paper uncritically assumed the “living fossil” hypothesis (which I’ve told you is bunk), the results in 4 out of 5 cases concluded that the genome was “slowly evolving”; in 7 out of 7 cases in which the work was critical of the “living fossil” hypothesis or did not even acknowledge it, they found that coelacanth genes were evolving at a perfectly ordinary rate. 
Research does not occur in a theoretical vacuum. Still, it’s disturbing that somehow authors with an ill-formed hypothetical framework were able to do their research without noting data that contradicted their ideas. 

Rural Relaxation: I see a "Dragon" on Bookham Common?

I like to spend an hour or so each day walking, preferably in the countryside. Yesterday I was walking past the ponds on Great Bookham Common, Surrey, and was surprised to see in the distance a large animal which had come down to drink on the far side of Lower Hollows:

Wednesday, 17 April 2013

Unconventional Ideas and the establishment

There have been further comments on Robin Ince's post "The Fascism of Knowing Stuff", including some relating to the idea of interesting unconventional ideas being suppressed as a result of peer review, and Nullifidian wondered whether there were any real "anonymous" scientists who had had problems - so with the following comment I stood up to be counted:

Sunday, 14 April 2013

Don't confuse Science and Technology.

Having read Robin Ince's post "The Fascism of Knowing Stuff" I felt he was confusing Science and Technology and added the following comment to his post.
I agree with your definition of science but at the end you are talking about technology as if science and technology were one and the same thing. Of course the two are closely linked but what the average person sees is not “pure” science but rather technology – and they only see that technology because someone is making money out of it!

There are many problems. If an early version of a technology is commercially acceptable better versions can be blocked because people have adjusted to the original technology (which may have become an international standard) and there are more people wanting the old technology (even if science has shown it to be inferior) than would benefit in the short term if the improved technology were introduced.

A good example is the QWERTY keyboard which was used on early typewriters, then on teleprinters, which were used as early input devices for computers … Much excellent research has been done on better keyboards, using the latest scientific advances – but QWERTY is still with us, although it is being replaced in some areas by completely different forms of information input.

The problem of competing technologies is illustrated by the triumph of VHS over BetaMax (which was said to be technically better) because the real battle was who would get the biggest market share – as people would buy the system with the biggest collection of recordings.

This raises a potential trap – if a new technology comes along and is extremely successful because there was no competition its total domination of the market would make it almost impossible to develop and market improved versions – and as a result it could be difficult to fund blue sky scientific research which questions the foundations of the technology.

Let me suggest where this may have already happened. The stored program computer emerged in the 1940s and was soon seen as a money spinner – with many companies rushing to get a foothold in the market. The rat race to capitalise on the invention has resulted in systems which dominate everyday life in much of the world, where the technology is taught in schools and everyone knows something about how computers work – if only in the form of an inferiority complex because “they are too difficult for me”.

In fact it is considered an unavoidable truth that computers are black boxes whose internal workings are incomprehensible to the computer user. But the stored program computer is incomprehensible because computers were originally designed to process mathematical algorithms carrying out tasks which the average person would also find incomprehensible. The problems computers were designed to solve are about as far from the problems faced by early hunter-gatherers as it is possible to imagine.

There must be an alternative. It is well known that nature has produced information processing systems (called brains) which start by knowing nothing (at birth) and can boot-strap themselves up to tackle a wide range of messy real world tasks. In the case of humans, their brains can exchange information and people can work together symbiotically.

So which scientists in the 1940s were asking whether blue sky research into a “human friendly computer” that worked like a brain would be possible? … or in the 1950s? … or in the 1960s? … …

If you look through the literature virtually everyone who ever thought about the problem was taking the stored program computer for granted. You will search the old literature in vain – and when people started to worry about the human user interface it was about writing programs to hide the inner black box from the human user. No-one was going right back to first principles to see if there was an avoidable weakness in the use of the stored program computer. And – because they were thinking of analogies with the stored program computer – it was taken for granted that the brain's “computer” must be so clever it was very difficult to understand, because it was “obviously” difficult to program. In effect the very successful technology was beginning to influence the way that scientists were thinking about research into how the brain works.

In fact in 1968, backed by the team which built the Leo Computer (the world’s first commercial computer), work started on early studies with the purpose of designing a fundamentally human friendly “white box” information processor. I was the project leader and the project ended up under the name CODIL. The problem we faced (which has got worse over the years) is that even if it had been successful (and results with software prototypes were very promising) it would have to battle with the established stored program computer market. Look at the investment in hardware, applications, data bases, trained staff, public understanding, etc. etc. of conventional systems and the inertia against possible change is probably valued in trillions of dollars.

To conclude I suggest that, because the computer revolution was technology led, key blue sky research was never done – and anyone proposing such blue sky research now is more likely to be greeted with hostility rather than adequate research funding.

Nullifidian replied - and the relevant part of his reply was:
Finally, I didn’t use the phrase “anonymous scientists” to invite people who thought that peer review had done them wrong to submit their tales of woe. Frankly, I don’t care. The point I was making there was to say that there are plenty of ways to get information out to the scientific world, and publication is actually the least efficient of these and arguably mostly irrelevant. Conferences, preprints, presentations before other university departments, etc. are where the scientific action is. However, all these means of getting around the peer review process require that your work actually be as interesting to your colleagues as you think it is.

In your own case, you haven’t demonstrated that the peer review system has suppressed a scientifically worthy idea. You cite the absence of people “go[ing] in [your] direction” as evidence that these views have been “crushed by the establishment at an early stage”, but an equally potent hypothesis is that your ideas are unworkable and nobody wants to spend their time trying to make the unworkable work. While I can’t say without seeing your ideas in full, the notion that you can just switch from computation to talking about the brain without any apparent background in neuroscience is another indication that you’re a crank. So is the use of coined terms and irrelevant jargon. In what way is a brain similar to an “ideal gas”? An ideal gas is hypothetical state in which the molecules all randomly moving small, hard spheres that have perfectly elastic and frictionless collisions with no attractive or repulsive forces between them and where the intermolecular spaces are much larger than the molecules themselves. None of these things are true in practice, of course, but they’re close enough to the model in most cases that it makes no difference. Now, neurons are not small hard balls, they don’t move in random directions and collide elastically, the synapses are not vastly larger than the neurons, and there’s no way the concept of an ideal gas appears to work even as a metaphor. So I’m not convinced that the rejection of your ideas by an unfriendly peer review system is evidence that the “establishment” is wrong.
I have now replied:
First let me thank you for your critical comments – as the enemy of good science is confirmation bias – and what is needed to explore controversial ideas is open no-holds-barred debate on the issues. I have now posted a discussion draft “From the Neuron to Human Intelligence: Part 1: The ‘Ideal Brain’ Model” and have added a section on nomenclature specifically because you raised the subject.
Now responding to your specific comments let me start by reminding you that I said “despite enormous efforts in many different specialist fields, there is no theory which provides a viable evolutionary pathway between the activity of individual neurons and human intelligence.”

If you think this statement is wrong I would be very grateful for a reference to a paper which describes such a model. If you can’t provide evidence of such research why are you so hostile to the suggestion that someone thinks that they might have a possible answer?

For instance you introduce a straw man argument relating to the analogy between my “ideal brain” model and an “ideal gas.” Of course I would be a crank if I thought neurons were little balls bouncing around in the brain – as you are suggesting. The whole point of the “ideal gas” model is to strip everything down to the bare essentials. You start with an infinite brain filled with identical neurons (cf. an infinite container filled with identical molecules). Interactions between neurons are not by collisions but by electrical connections which carry signals of variable strength. (In theory every neuron is connected to every other one – but in the vast majority of cases the strength of the interaction is zero.) In an ideal gas the three properties of interest are pressure, volume and temperature, while in the ideal brain we are interested in the ability to store patterns, recognise them, and use them to make decisions. Another similarity is that both models work pretty well in some cases – for instance the ideal brain model suggests one reason why humans are prone to confirmation bias – and when the models start to fail they can be used to explain the differences.

Your comment about switching between computation and talking about the brain is interesting for two reasons.

Any research model which attempts to link the neurons to human intelligence will involve many different disciplines in fields such as psychology, childhood learning, animal behaviour, linguistics, artificial intelligence, and neuroscience, and in addition will undoubtedly involve modelling on a computer. I would argue that what is needed is the ability to stand back and see the wood for the trees – and that having too much mental commitment to any one speciality could be a liability. You seem to be suggesting that neuroscientists are some kind of super-scientists who have a monopoly on holistic approaches to how the brain works.

However the comment is interesting because it pin-points the problem I have had. My ideas became trapped between a rock and a hard place. I worked as an information scientist (in the librarian sense) before entering the computer field and was used to seeing how people handled complex information processing tasks. I then moved to computers and concluded that there were serious flaws in the design of stored program computers – suggesting a fundamentally different model that reflected how people handled information. I could not get adequate support from the computer establishment because computers were so successful that it was assumed there couldn't be any serious flaw in their design, and even if there were problems there was so much money to be made ploughing on regardless that any time spent on blue sky research that questioned the ideas of people like Turing was seen as a waste of time.

At the same time I was getting comments from other fields that I could not be modelling how people think because the standard computer model was wrong and as I was a computer scientist I must also be wrong! I am sure your critical comment was based on a stereotyped view that tars all computer scientists with the same brush.

Friday, 12 April 2013

TANTALIZE - the School Colours problem - and peer reviews

    "Tell Me, Professor Pinhole, which school does your daughter Alice go to?"
    "Let me think. Is it the one with the orange hat and the turquoise scarf? or with the khaki blazer and orange emblem? or with the pink blazer and orange scarf? or with the khaki scarf and pink emblem? or with the khaki hat and turquoise emblem? I fear I cannot recollect."
   "Good Heavens, Professor! However many schools are there?"
   "Just four and I have one daughter at each. Bess goes to St Gertrude's, Clare wears a turquoise hat and Debbie wears a khaki emblem. St Etheldreda's flaunts a pink scarf, St Faith's an orange blazer and St Ida's a pink hat."
   "And whose are those clothes flung down on the floor over there?"
   "The turquoise hat and the khaki blazer belong to different girls. As for the turquoise blazer, well, I think you might work out whose that is for yourself." 
Martin Hollis
(For solution see the paper on TANTALIZE)

In fact some of the work I did with the TANTALIZE package in the 1970s is relevant to the brain modelling work I am doing now - and a little of the history is worth recounting.

In 1972 I started the work of implementing the second version of the CODIL interpreter on the 1903A computer at Brunel University, with a view to concentrating on open-ended commercial and database tasks once I had got the system up and running. One day I had a discussion with a colleague, Roland Sleep, who pointed out that while there was a lot of hype about Artificial Intelligence what was actually being done was comparatively simple - and he lent me a copy of a Ph.D. thesis on one of the leading problem solver packages. Within three days I had CODIL up and running the key examples in the thesis. I followed this up and used CODIL to implement a problem solving package which I called TANTALIZE - which, among other things, solved the Tantalizer "brain teaser" puzzles for 15 consecutive weeks as they were published in the New Scientist. The first paper I wrote was on TANTALIZE, and it included a number of examples of CODIL both on its own and using the problem solver.

The reason for mentioning TANTALIZE now is that CODIL was not designed to be a programming language, but as it is designed to reflect the user's view of his information processing task it has to accommodate users who want to use it to "write programs". TANTALIZE is by far the biggest CODIL "programming" task written and can be considered a sophisticated production rule system, written in, processing, and obeying production rules. The first phase asks the user a series of questions about the task, along with any general information on the type of task and the resources needed. The second phase turns the user input into a set of production rules, and in some cases the package uses dynamic learning to sort the rules into a "most likely to succeed" order - which can lead to orders-of-magnitude reductions in the time needed in the third stage. The third stage takes the optimised production rules and uses them to search the problem space and present the answer.
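The "most likely to succeed" reordering in the second phase can be sketched in a few lines of Python. This is only a toy illustration of the idea – TANTALIZE itself was written in CODIL, and the rules and numbers below are invented:

```python
# Toy production-rule loop with dynamic reordering: rules that fire
# are promoted so they are tried first on later cycles. Illustrative
# only; not a reconstruction of TANTALIZE.

def solve(state, rules, max_steps=100):
    """Apply rules until none fires, promoting rules that succeed."""
    successes = {id(r): 0 for r in rules}
    for _ in range(max_steps):
        for rule in rules:
            new_state = rule(state)
            if new_state is not None:          # the rule fired
                successes[id(rule)] += 1
                # Dynamic learning: try often-successful rules first.
                rules.sort(key=lambda r: -successes[id(r)])
                state = new_state
                break
        else:
            return state                        # no rule applies
    return state

# Two invented rules reducing a number to 1.
halve = lambda n: n // 2 if n % 2 == 0 and n > 1 else None
sub1  = lambda n: n - 1 if n % 2 == 1 and n > 1 else None
assert solve(12, [sub1, halve]) == 1
```

The reordering matters because a rule that fails is pure wasted search; moving frequently successful rules to the front is what gives the orders-of-magnitude savings described above.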

I continued solving problems with TANTALIZE but immediately ran into difficulty with the peer review system in getting A.I. papers accepted - so I simply switched to other application areas and dropped the work on heuristic problem solving. After all, CODIL was not designed to handle small well-defined closed problems - but naturally it can do them because they are a subset of the bigger, less well-defined, open-ended real world problems with which it is really concerned.

In retrospect it is interesting to look at why, for example, a paper was rejected as "Too theoretical - will never work" when I had reported in detail the way the package actually solved a wide range of problems. Or why I was told about another that if I wanted to get papers accepted I should use the POP-2 programming language. A paper sent to a leading journal in the USA came back with two vitriolic reviews, one reviewer admitting to not understanding it, and one favourable review. I was so cheesed off by multiple rejections at this stage that I just junked it, and only some years later rediscovered the covering letter from the editor (who would have known who the reviewers were) which ended with the advice that I should continue, as he felt there must be something in it to have annoyed two of the reviewers so much.

Of course the real problem is that we are all trapped in the mental boxes we have constructed for ourselves during our lifetime, and my mental box did not overlap with the mental boxes of the majority of the A.I. establishment. For instance I approached the problem from the angle that there are many very complex open-ended problems - with no simple solutions - and to me the logic puzzles were a trivial artificial subset of the real world - one where there were precise pre-defined rules and unique answers. The A.I. establishment at the time concentrated on applying formal mathematical models to closed tasks - such as game playing - in the belief that this was the way forward to modelling intelligence. My papers did not fit in as the reviewers were not expecting a solution coming from the area of open-ended and poorly defined tasks. Looking back it is clear that I was not really aware of how counter-intuitive some of my ideas were. I suspect that most genuine "outside the box" research has similar problems with peer review systems for both academic publication and research grants.
If you read the TANTALIZE paper earlier you will find the missing sections have now been added.

An account of the TANTALIZE package published in the New Scientist is below the break.

Monday, 8 April 2013

Speech-like vocalizations in our primate cousin - the Gelada

One of the differences between humans and other primates is our wide range of vocalizations - which we exploit in our use of an extensive spoken language. Recent work by Thore J. Bergman of the University of Michigan on the gelada (a close relative of the baboons) shows that it makes a "wobble"-like noise that has some similarity to human speech. There are also some other references to lip-smacking as a means of signalling in primates.

Speech-like vocalized lipsmacking in geladas, Thore J. Bergman, Current Biology Vol 23 No 7

All Neurons are potentially "Mirror Neurons"

Commenting on the post "Mirror Neurons Can Reflect Hatred" by Daisy Yuhas I wrote:

I am currently working on an “ideal brain” model (think of physical science's “ideal gas” model with neurons instead of molecules and links between neurons like collisions between molecules). One of the features of this model is that any network of neurons can work in two ways – recognising or doing – and the two roles are dynamically interchangeable. This suggests that there is not a class of “mirror neurons” because in theory all neurons can work in this dual manner.

Of course some activities are more relevant to recognition and some more relevant to doing, and there are some where it would be difficult to carry out the kinds of experiments that led scientists to postulate that there was a special class of neuron.

What is interesting is that a basic feature of how neurons work could also be important in understanding how animals and people interact. The relevance of the mechanisms to empathy and social interactions is extremely interesting – but one must be careful to recognise that how much attention the mind gives to something will be affected by how relevant the activity is seen to be – and the observed levels of “mirror neuron” activities in experiments may be more related to motivation to pay attention than any specific factor in the part of the neural network being monitored.

What I find more interesting is that the ability of the mind to mirror what other people/animals are doing could be very relevant to some kinds of learning. Our brain sets up a neural pattern which recognises the activity and then tries to execute it to repeat the actions.
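A toy sketch may make the dual recognise/do role clearer. The Python below is purely illustrative – it claims nothing about real neurons, and the "wave" action sequence is invented:

```python
# One stored pattern used in two interchangeable ways: scoring an
# observed action (recognising) and replaying it (doing). A hedged
# sketch of the dual-role idea, not a neural model.

class Pattern:
    def __init__(self, sequence):
        self.sequence = list(sequence)       # e.g. a motor sequence

    def recognise(self, observed):
        """Fraction of the observation that matches the stored pattern."""
        hits = sum(a == b for a, b in zip(self.sequence, observed))
        return hits / max(len(self.sequence), 1)

    def do(self):
        """The same stored pattern, replayed as action."""
        return list(self.sequence)

wave = Pattern(["raise", "open", "move", "lower"])
seen = ["raise", "open", "move", "drop"]
assert wave.recognise(seen) == 0.75                      # watching
assert wave.do() == ["raise", "open", "move", "lower"]   # imitating
```

The point of the sketch is that nothing in `Pattern` marks it as a "mirror" unit: the same stored information serves both roles, which is why the model suggests all neurons could behave this way.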

Looking for the "Neural Code"

Going through an older section of my email inbox I found a Scientific American link to John Horgan's blog post Do Big New Brain Projects make sense when we don't even know the “Neural Code” in which he wrote:
Neuroscientists have faith that the brain operates according to a “neural code,” rules or algorithms that transform physiological neural processes into perceptions, memories, emotions, decisions and other components of cognition. So far, however, the neural code remains elusive, to put it mildly.
The neural code is often likened to the machine code that underpins the operating system of a digital computer. According to this analogy, neurons serve as switches, or transistors, absorbing and emitting electrochemical pulses, called action potentials or “spikes,” which resemble the basic units of information in digital computers.
I prepared the following, perhaps too lengthy, comment but when I came to post it I got a message that the page had been moved - and all attempts to find it resulted in irrelevant pages on the Scientific American web site. So I am posting my response below:
I would argue that the problem with modern brain research is an inability to see the wood for the trees. Of course if you look in detail at the brain things get very complex. But such complexity is common in science. The key idea underlying evolution is very simple – but when you look at individual cases in detail there can be enormous complexity. The same applied in medieval times when the movements of the known heavenly bodies appeared to be very complex – until it was realised that things became much simpler if you calculated the motions of the planets using the sun, rather than the earth, as a key reference point.

The problem with the human brain is that, as John Horgan says, we don't know the “Neural Code” and virtually everyone is looking at the problem in ever greater detail – apparently on the assumption that the harder you look at the fine detail the more certain you are to find out the shape of the wood!

I have been trying to stand back and get an overview and have come up with an “ideal brain” model which in some ways parallels the “ideal gas” model in physics. All neurons (like all gas molecules) are identical – and the dynamic links between neurons are like the dynamic collisions between gas particles. Using such a simple model it is possible to “grow” a brain that can remember and use more and more complex concepts – with the complexity of the most advanced concepts it can handle depending on the brain's capacity and time for learning. The model explains consciousness and can predict detailed observations about the brain - for instance the so-called “mirror cells” in the brain turn out to be nothing special as the observations simply reflect the way that all neurons work in the “ideal brain”. In addition it is possible to ask how human “intelligence” might evolve, and this approach predicts a major tipping point (rather than some major genetic “improvement”) which produces an “explosion” of “new ideas” when “cultural intelligence” becomes a more effective tool than the innate biological “intelligence” of the “ideal brain” model.
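To give just the flavour of the "growing" step, here is a toy chunking sketch in Python: identical nodes that repeatedly occur together are combined into a new higher-level concept. The threshold, representation and examples are my own assumptions, not part of the model as published:

```python
# Toy "growth" step in the spirit of the ideal brain model: all
# nodes are identical, and pairs that co-occur often enough are
# chunked into a higher-level concept. Illustrative assumptions only.

from collections import Counter
from itertools import combinations

def grow(experiences, threshold=2):
    """Chunk pairs co-occurring in at least `threshold` episodes."""
    pairs = Counter()
    for episode in experiences:
        pairs.update(combinations(sorted(episode), 2))
    return {frozenset(p) for p, n in pairs.items() if n >= threshold}

episodes = [{"Macbeth", "Dagger"}, {"Macbeth", "Dagger"},
            {"Macbeth", "Shakespeare"}]
chunks = grow(episodes)
assert frozenset({"Dagger", "Macbeth"}) in chunks        # learnt
assert frozenset({"Macbeth", "Shakespeare"}) not in chunks  # not yet
```

Repeating the step on the chunks themselves would build ever higher-level concepts, which is where the capacity and learning-time limits mentioned above come in.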

The problem with the model, and possibly the reason why it appears not to have been explored before, is that to a “culturally matured” mind (and all people accessing this text on the internet will be culturally mature) the model involves several counter-intuitive steps.
  1. The model assumes that at the genetic level the only significant difference in the processing mechanisms between our brains and those of most animals relates to supercharging effects (more capacity, more links, more effective blood supply, etc.), and that if there is a difference the model actually suggests a reason why we might be genetically less intelligent than some other animals! Before you shout me down over this “outrageous claim” I should point out that the model suggests why culturally supported intelligence is infinitely more effective than the genetic intelligence foundation on its own.
  2. You have to forget everything you have learnt about computers and algorithms. The definition of a stored program computer requires there to be a pre-defined model of the task to be performed. The “ideal brain” model starts by knowing nothing about anything and has no idea what kinds of tasks it will be required to carry out. Virtually all it does is store and compare patterns without having any idea what those patterns represent. Once you start looking in great detail at how specific named tasks are processed you have taken your eye off the ball - as you are asking what the brain can learn to do, and not what the underlying task-independent mechanism is.
  3. Everyone knows there can't be a simple model of a Neural Code – because with so many people are looking someone would have found it if it existed - so there is no point in looking ...
  4. My research has “reject” stamped in all the standard “Winner of the Science Rat Race” boxes. I make no secret that I am 75, am not currently associated with any established research group, and the only facilities I have are a P.C. in a back bedroom, access to the internet, and access to some old research notes on a long abandoned blue sky project which was trying to design a human friendly white box computer to replace the standard human hostile black box computer everyone takes for granted.
If you are interested I hope to have a detailed description of the “ideal brain” model on my blog later this month.