From the Neuron to Human Intelligence
Part 1: The “Ideal Brain” Model
Christopher F. Reynolds
Tring, Herts, HP23 4HG, UK
Abstract
This discussion paper asks what the minimal properties are of an animal brain that can remember and recognise patterns and use the information to make simple decisions. It then shows how this could be modelled in an ideal brain (with analogies to the ideal gas of the physical sciences) consisting of recursively interlinked identical neurons. It then shows how such a simple brain could carry out significant information processing by comparing it with earlier research into the possibility of building an intrinsically human-friendly “computer”. The model reflects a number of features of the human brain, such as confirmation bias, but it is also clear that it could not support the human level of mental activity through trial and error learning within a reasonable time frame.
Part 2: Evolution and Learning (to be posted here shortly) will look at human evolution and suggest possible evolutionary routes for an ideal brain which lead to a tipping point in learning mechanisms which made a cultural explosion possible.
Table of Contents
1 Introduction
2 Modelling the Ideal Brain
2.1 Modelling the Connections – The Basic Mechanism
2.2 Modelling the Meaning – The Unknown Dictionary Model
2.3 Defining the Memode
3 Some simple examples
3.1 Macbeth
3.2 The Peacock
3.3 I am Hungry
4 CODIL and the Brain Model
4.1 The Background to CODIL
4.2 How CODIL differs from the Stored Computer Model
4.3 The parallel between CODIL and Memodes
4.3.1 CODIL Items
4.3.2 CODIL Facts and the Brain's Working Memory
4.3.3 Decision Making and Learning
4.3.4 Other CODIL Features
5 Discussion
5.1 The Ideal Brain is not a Real Brain
5.2 The Problem of Learning
5.3 Formal Logic and Confirmation Bias
5.4 Memory and Uncertainty
5.5 A Note on Nomenclature
6 Conclusion
1 Introduction
While a very considerable amount of research is being carried out, in many different disciplines, relating to the human brain, how we learn, and how intelligence has evolved, there appears to be no agreed neural code model to bind the research together. The aim of the research described here is to propose a possible draft framework which starts with the electrical activities at the neuron level and provides a plausible evolutionary path to explain the phenomenon of human intelligence. This paper describes an “ideal brain” model which suggests how information could be stored and processed in a simple animal brain which can recognise and remember patterns, and use those patterns to make simple decisions. Part 2 will look at how such an ideal brain model could evolve to support what we now recognise as human intelligence.
In describing this model it is important to realise that we see the world in terms of what might best be described as an ocean of cultural intelligence, and that when this is stripped away to reveal the genetically determined brain, as described here, a number of features appear which many will find counter-intuitive. It is also important to realise that this model is intended to provide a framework for planning future research, and the aim is to promote discussion on how the many different approaches to brain research might be brought together in a unified model of human intelligence.
2 Modelling the Ideal Brain
Any animal which can assess its environment and react appropriately must have a brain that can learn patterns and use those patterns to recognise objects in the environment. It should be able to use such patterns to make decisions as to what actions the animal needs to take. In addition it must work in a brain which consists of a network of interlinked neurons. The aim here is to construct a model which has these properties, is capable of non-trivial information processing, and which could have evolved from simpler neuron networks. Of course it is only an idealised model, and the interesting part is to find out how, and why, real brains differ from the model.
2.1 Modelling the Connections – The Basic Mechanism.
Think for a moment about the model of an ideal gas in the physical sciences. All molecules are deemed to be identical. They are moving about at varying speeds in various directions and colliding with each other. They are considered to be in an infinite container – so that one can forget about the problems of collisions with the walls of the container. Of course real gases deviate from this model in various ways, but one of the values of the model is that it allows such deviations to be identified and understood.
Now think about an “ideal brain”. It contains an infinite number of identical neurons. Neurons exchange signals via pathways of varying strengths. (The exact form of the signal is not part of the model – the important factor is that some pathways are more effective than others.) The strength of a pathway depends on the amount of traffic along it, and if there is no traffic the strength will drop to zero. The model assumes every neuron is linked to every other neuron, but in most cases there is no real link, as the strength of the connection is zero, which is the default start value. This ensures that initially the brain is empty of information, and connections are only established once the brain is working and has learnt something.
Between any pair of neurons signals may pass in either direction, but by different mechanisms, which will be referred to as “up” and “down” signals. The strength of the connection need not be the same in both directions, and in some cases the connection may only be one way.
A neuron normally becomes active if the combined strength of the up signals it receives from neurons “below” it exceeds a threshold – and once active it sends up signals to the neurons “above” it. A neuron which does not have an up link strong enough to pass on a message will be referred to as a “top” neuron. “Down” signals pass in the opposite direction and will be discussed later.
In the initially empty brain some of the neurons will have inputs from outside (in a real brain, from sense organs) and when they get a signal they will initially be “top” neurons. Learning takes place when several “top” neurons are simultaneously firing. If neurons A, B, and C are firing they will all make links with another neuron, let us say M, and from then on when A, B, and C fire they will alert neuron M – which becomes the “top” neuron. Similarly D, E, and F firing together alert N as a top neuron. Now if M and N are active “top” neurons at the same time they might make links to another neuron Z – which now becomes the “top” neuron when M and N are firing. This process is recursive and in the ideal brain there is no maximum to the depth of nesting of linked neurons.
The important thing to realise is that this is a learning process, in which the strength of the links between neurons can increase with use or fade away with neglect, and over time the brain will fill up with networks leading to higher and yet higher level “top” neurons. As will be seen later, the time allocated to learning, the speed of learning, and the depth of nesting of levels are critical to understanding the evolution of intelligence.
2.2 Modelling the Meaning – The Unknown Dictionary Model
Imagine you have come across an
alien dictionary, written in an unknown language. Each page has a
word written at the top, and underneath there are definitions of that
word, written in the same language. You don't know what the word
means – but you can use its definition to work it out – except
that all the definitions are written in the same language – and
eventually you find yourself going round in circles getting nowhere.
Without any pictures or other guidelines you don't know what any of
the words mean. There may be useful information stored in the
dictionary but you don't have the key.
In fact the words at the top of each page are, to you, simply arbitrary symbols, and you could replace each word with the number of the page on which its definition occurs without destroying any of the meaning. Taking this one stage further, imagine each page replaced by a post, and run a string from each post to all the other posts referred to in its definition. Having done this you take the posts away – leaving knots where the posts were – and end up with a very tangled ball of interconnected strings. It may look very untidy – but all the original links are still there.
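The post-and-string construction can be made concrete with a toy Python fragment. The miniature circular dictionary below is invented purely for illustration: each headword is replaced by its page number, and the tangle of strings becomes a plain adjacency list, with all the original links preserved.

```python
# A tiny invented "unknown dictionary": every definition is written
# using only other headwords, so the definitions are circular.
dictionary = {
    "big":      ["not", "small"],
    "small":    ["not", "big"],
    "not":      ["opposite"],
    "opposite": ["not", "small"],
}

# Replace each word with its page number...
page = {word: i for i, word in enumerate(dictionary)}

# ...and each page becomes a knot whose strings point at the pages
# of the words used in its definition.
graph = {page[w]: [page[d] for d in defn] for w, defn in dictionary.items()}
```

The word labels are now gone, yet the structure – which definitions refer to which – survives intact, which is exactly the point of the analogy.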
So let us take this dictionary model
and map it onto the ideal brain model defined above. Each knot
becomes a neuron and its definition is in terms of “down”
connections to lower level neurons – and the whole process is
recursive – going on to lower and lower levels. In addition most
neurons will have up connections with neurons which represent higher
(and recursively ever higher) concepts.
2.3 Defining the Memode
In
discussing the behaviour of the ideal brain and comparing it with
actual brains we need a handle to address the contents of this
amorphous network and I am calling this handle a memode.
Every
memode consists of a single “top” neuron, together with all the
“lower” neurons which can send it “up” signals and/or can
receive its “down” signals.
The following points
apply:
- The definition is recursive and the memode reaches down through an unspecified number of levels.
- Every top neuron can also be a lower neuron in a possibly large number of other memodes.
- A memode is active when the top neuron is active. In addition the lower neurons in the memode which triggered the activity will be active – but neurons in the memode not involved in triggering the activity will remain passive.
- There is no limit on the depth of nesting of memodes or the complexity of the concepts a memode can represent. However there is no difference between neurons which represent a lower level concept and a high level concept as all neurons are identical.
- The definition is useful in discussing human thought and language – as it helps to relate each human concept to a particular memode.
- The meaning depends on the learning history of the links between the lower memodes and will dynamically change with time.
- The memode is no more than a network of connections buried in a network of connections representing other memodes and there is no simple answer to the question as to where exactly information is stored. An active concept is just the sub-set of neurons which are active in a particular memode.
At this
stage it is appropriate to suggest the following links between the
ideal brain model and the human brain:
- The human working memory consists of the memodes that are currently active.
- At any one time there is a limit to the number of discrete memodes that can be active. (Compare Miller's magic number seven.)
- Our conscious thoughts represent the concepts associated with the currently active memodes – including the active lower neurons in the memodes.
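As a rough sketch of these definitions – all names and the concrete number used are illustrative assumptions, not part of the formal model – a memode and the limited working memory might be represented like this:

```python
from collections import deque

# Each memode is a top name plus its down-links; the definition is
# recursive because the lower names may themselves be memodes.
memodes = {
    "Peacock": ["Sight-of-peacock", "Sound-of-peacock", "Word-for-peacock"],
    "Sight-of-peacock": ["Looks-like-bird", "Peacock-tail"],
    "Peacock-tail": ["Looks-like-fan", "Looks-like-eye"],
}

def reaches_down(name, target):
    """A memode recursively contains every memode its down-links reach."""
    if name == target:
        return True
    return any(reaches_down(child, target) for child in memodes.get(name, []))

# Working memory: only a handful of memodes active at once
# (cf. Miller's magic number seven); the oldest simply drops out.
WORKING_MEMORY_LIMIT = 7
working_memory = deque(maxlen=WORKING_MEMORY_LIMIT)
```

The `deque` with a fixed `maxlen` captures the bullet points directly: adding an eighth active memode silently pushes the oldest one out.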
3 Some simple examples
3.1 Macbeth
Imagine you are watching Shakespeare's play Macbeth, which you have never seen before. Your brain has already identified the new concepts “Macbeth” and “Duncan”, and as Act II, Scene I approaches its end an existing concept, “dagger”, becomes active. Then come the lines
I
go, and it is done; the bell invites me.
Hear it not, Duncan; for it is a knell
That summons thee to heaven or to hell.
Another concept flashes into your mind – “murder” – and the brain has active memodes for “Macbeth”, “Duncan”, “dagger” and “murder”. The brain remembers this event by creating a new, higher memode above the currently active memodes, which become its “down” links.
For discussion purposes in these posts the resulting memode will be represented as
Death-of-DUNCAN
{Macbeth, Duncan, Dagger, Murder}
Of course these words are not directly stored in the “top” neuron – they are simply used as labels for the purpose of this discussion – and a mouse brain may well have a memode which corresponds to
Cat
{Sight-of-cat, Sound-of-cat, Smell-of-cat}
while the part of the brain concerned with hearing would have memodes linking phonemes, with similar memodes for sight and other senses. Because all neurons and links are the same in the ideal brain, the model uses the same learning mechanism at all levels and for all inputs.
3.2 The Peacock
What kinds of information will a human brain hold
about a peacock? One possibility would be
Peacock
{Sight-of-peacock, Sound-of-peacock, Word-for-peacock, ...}
Sight-of-peacock
{Looks-like-bird; Size-of-goose; Peacock-tail, ...}
Peacock-tail
{Looks-like-fan, Looks-like-eye, ...}
Word-for-peacock
{Sound-of-word-peacock, written-word-peacock}
When someone sees a peacock a signal
will pass up (starting with the neurons connected to the eyes) and
leave a trail of active neurons stopping at the top neuron in the
Peacock memode. The neurons
involved will vary depending on what the eyes can see – for
instance if the tail is folded down the Looks-like-eye
memode branch within the Peacock
memode would not be activated.
If the activated Peacock memode is of lower priority than other simultaneously active memodes in the brain's working memory, the activity in the memode will decay and it will drop out of the working memory. Different parts of the Peacock memode would be activated if the human ears heard Sound-of-word-peacock, and this time some down links may be activated – perhaps via Sight-of-Peacock, Peacock-tail, and Looks-like-eye – and the listener imagines he can see a peacock's tail. The important thing to realise is that exactly the same neurons are used to recognise an object and to remember it – recognition is through what I call the “up” links, and recall through the “down” links. This also suggests that dreaming may simply involve visiting the down links, exploring sound, smell, or sight images from the memory.
On another occasion the human might be writing a letter when a sudden shriek from the garden activates the Peacock memode – down signals are sent to Word-for-peacock, passed on to Written-word-peacock and beyond – and the human writes the word “peacock”. This involves the use of the down links to trigger activity.
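The symmetry between recognition (up) and recall (down) can be sketched as two walks over the same link table. This is a hypothetical Python fragment with invented memode names, not a claim about real neural wiring:

```python
# The same link table is walked upwards to recognise and downwards to recall.
down = {
    "Peacock": ["Sight-of-peacock", "Sound-of-peacock"],
    "Sight-of-peacock": ["Looks-like-bird", "Looks-like-eye"],
}
# Invert the table to get the "up" links.
up = {child: parent for parent, kids in down.items() for child in kids}

def recognise(sense_input):
    """Climb the up-links from a sense input to the top neuron."""
    trail = [sense_input]
    while trail[-1] in up:
        trail.append(up[trail[-1]])
    return trail

def recall(top_name):
    """Descend the down-links from an active top neuron (as in dreaming)."""
    out = [top_name]
    for child in down.get(top_name, []):
        out.extend(recall(child))
    return out
```

The point of the sketch is that no separate storage is needed for the two directions: recognition and recall are just opposite traversals of one set of connections.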
3.3 I am Hungry
Now let us consider the type of memode that is
likely to be present in all animal brains in one form or another.
Want-to-eat
{Hungry, Food, Eat}
Hungry represents a memode which becomes active if sufficient lower memodes send a combined “up” signal, and when Hungry is active Want-to-eat automatically becomes active, but initially Food and Eat are not active. So Want-to-eat sends a “down” signal to Food.
Food {Orange
{...}, Apple {...}, Banana {Sight-of-banana, Taste-of-banana}}
Effectively what has happened is
that the top neuron of the Food
memode has been asked if any of its lower memodes are active – and
if for example the Banana memode
is active, the Food memode
becomes active and Want-to-Eat
sends a down signal to the Eat
memode – which triggers the act of eating.
However if the Banana memode is not active it would send a down signal to the even lower Sight-of-banana memode – which might trigger the neurons responsible for the eyes to look around – and if the result is still negative, trigger the host animal to go and look for a banana palm.
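The Want-to-eat chain above can be sketched as a short Python fragment. All the names and the recursive activity test are illustrative assumptions drawn from the example, not a formal part of the model:

```python
# Down-links for the Want-to-eat example.
down = {
    "Want-to-eat": ["Hungry", "Food", "Eat"],
    "Food": ["Orange", "Apple", "Banana"],
    "Banana": ["Sight-of-banana", "Taste-of-banana"],
}

def is_active(memode, active):
    """A memode counts as active if it, or any lower memode, is active."""
    return memode in active or any(is_active(m, active) for m in down.get(memode, []))

def want_to_eat(active):
    """Mimic the chain: Hungry fires, Food is queried, Eat is triggered."""
    if "Hungry" not in active:
        return "do nothing"
    if is_active("Food", active):   # down signal: is any food memode active?
        return "eat"
    return "search for food"        # e.g. trigger the eyes to look around
```

For instance, if the set of active memodes is `{"Hungry", "Sight-of-banana"}` the down signal finds an active lower memode under Food, so the Eat branch fires; with only `{"Hungry"}` active the animal is sent off to search instead.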
4 CODIL and the Brain Model.
If the “ideal brain” model is going to be useful in explaining human intelligence we need to understand how good it is at actually processing information, and what its limitations are. At first glance it looks very simple, and as it was only intended to be a minimal model capable of doing some useful work in an animal's brain it is very easy to jump to the assumption that it can only learn simple patterns and make simple decisions. Fortunately there is a model available, with experimental data, in the shape of an abandoned blue sky computer project. The CODIL project was exploring whether it was possible to design an information processor which was human-friendly at the central processor level. It is shown below that the system's language (CODIL) can be mapped onto the “ideal brain” model, and while CODIL could do (and was deliberately designed to do) some things which one would not expect of the “ideal brain”, it is clear that there is very considerable potential to handle some classes of complex problems.
4.1 The Background to CODIL
CODIL (COntext Dependent Information Language) was the symbolic assembly language of a hypothetical non-Von Neumann information processor which it was hoped would provide a human-friendly alternative to the inherently human-unfriendly inner workings of the stored program computer. It arose from a design study carried out in 1966-67 into the sales accounting systems of a very large oil marketing company, Shell Mex and BP, when they were looking at ways of transferring their magnetic tape batch processing system (which handled about 250,000 customers and about 5,000 products) onto a system with online access. The study suggested that, rather than try to pre-define every possible requirement in advance, it might be possible to build a system where sales staff and the computer worked together using a common two-way language – so that the computer could always explain to the sales staff what it was doing and why.
This approach was considered too revolutionary at the time, but later in 1967 it was realised that the design could be generalised and, with support from the UK computer pioneers John Pinkerton and David Caminer, a small research team was set up within English Electric Leo Marconi, and patents relating to the processor of a radically different kind of computer were taken out. The pilot program set up to look into the approach met all its targets (The CODIL Language and its Interpreter, Computer Journal, 1971), but unfortunately the project was closed down when the Research Department was disbanded on the creation of International Computers Limited. The research continued on an unfunded basis at Brunel University for some years – and CODIL was shown to be able to support a wide range of applications, including a very powerful Artificial Intelligence type problem solver called TANTALIZE. A logically powerful subset, MicroCODIL, was trial marketed as a schools teaching package on the tiny (32K byte) BBC Microcomputer (to demonstrate that the approach did not require a high powered machine to implement) and received many favourable reviews, but the project was effectively abandoned (in part for medically related reasons after a family suicide) two years before a detailed definitive paper appeared in print (CODIL – The Architecture of an Information Language, Computer Journal, 1990).
4.2 How CODIL differs from the Stored Computer Model
The
stored computer approach in many ways reflects the scientist's aim in
doing research. The goal is to create an algorithm or global theory
which will process/explain potentially very large collections of
formalised numerical data. This approach is particularly appropriate
to problems involving lengthy and/or sophisticated applications which
the average human brain would find impossibly difficult. The approach
will only handle a given task if someone (presumably human) has
created the algorithm/theory and thus represents a top down approach
to automatically processing information.
Because the stored program computer originated in performing numerical calculations on arrays of numbers (which can be addressed by number) it is not surprising that at the hardware (or microcoded processor) level it handles words containing numbers. These numbers may represent actual numbers (in various formats), text relevant to the task, addresses (including addresses of addresses and address modifiers), program symbolic names, etc. The meaning of any given word depends on both context and history, and this is one of the reasons why computers are “black boxes” where the user cannot look inside (at the hardware level) and see what the machine is doing in understandable human terms.
CODIL takes a diametrically opposite approach, working from the bottom up, and will handle open ended tasks, potentially involving incomplete or uncertain information, where it is not practical (or in some cases possible) to pre-define a global model. It is particularly appropriate where humans and “computers” have to work together on uncertain and ever changing real world tasks. Instead of numbers, CODIL uses symbols, where every symbol is the name of a real world object, a set, or a subset. In addition any symbol may represent a list of symbols which provides an expanded definition. Processing is by a remarkably simple but highly recursive decision making unit which simply scans the system's memory comparing symbols/sets and treating each symbol it finds as either “passive data”, a condition (there is no explicit “IF” in the CODIL language), or a modifier of the current context. Apart from a few pre-defined symbols to control things such as the input and output of information there are no explicit commands in the CODIL language. The whole idea is to have a WYSIWYG type system where everything the system does uses the human user's own symbols, and these are manipulated in such a way that the process is obvious to the human user.
Of course the CODIL approach would become impossibly cumbersome and very slow if it were used to implement formal mathematical tasks involving sophisticated mathematical operations on large arrays of well-structured data – because it has no automatic way of addressing data held in numerically defined storage locations. However the important thing to realise is that CODIL was optimised to work symbiotically with humans on open-ended tasks, and complements, rather than replaces, the way that stored program computers are usually used.
4.3 The parallel between CODIL and Memodes
4.3.1 CODIL Items
CODIL was designed as a toolbox to allow people to process information with a computer-like box, and the system therefore contains features relating to the hardware (for instance interacting with a keyboard) which are clearly irrelevant to modelling the brain. In addition it contained a number of “advanced” features – such as handling numbers and word processing – which are clearly irrelevant to a simple animal brain – and again these will be ignored in the following discussion.
The basic
unit of information in CODIL is an item, which can be a single word,
or a word pair, and these are arranged in lists of lists. This means
that
Macbeth;
Duncan; Dagger; Murder.
is a valid list of
CODIL items, and this should be compared to the memode example
earlier.
Death-of-DUNCAN
{Macbeth, Duncan, Dagger, Murder}
In CODIL Death-of-DUNCAN would be the name of a file containing Macbeth, Duncan, Dagger and Murder – but as all filenames are automatically item names we have a recursive structure where any item can be a list of items.
So in memode terms each CODIL item is a symbolic name identifying a memode, and a memode contains a list of memodes, each of which is equivalent to a CODIL item.
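In Python terms the recursion might be sketched like this. The representation, and the extra entries under Macbeth, are my own illustration, not CODIL syntax:

```python
# Any item may be the name of a file, and a file is a list of items,
# so any item can expand into a list of items - recursively.
files = {
    "Death-of-DUNCAN": ["Macbeth", "Duncan", "Dagger", "Murder"],
    "Macbeth": ["Murderer", "Thane-of-Cawdor"],   # invented for illustration
}

def expand(item, depth=1):
    """Expand an item into its list of items, like opening a memode."""
    if depth == 0 or item not in files:
        return item
    return {item: [expand(i, depth - 1) for i in files[item]]}
```

Expanding Death-of-DUNCAN one level gives the flat list from the memode example; expanding two levels also opens the Macbeth item, mirroring how a memode reaches down through an unspecified number of levels.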
But CODIL items can also be word pairs such as
Murderer
= Macbeth; Victim = Duncan; Weapon = Dagger; Action = Murder.
At the human level this is far more meaningful as, although it is simply a list of item pairs and contains no verbs, it can easily be related to natural language statements, which is why most CODIL applications use item pairs – and why most CODIL files are immediately meaningful to human users.
This can also be interpreted in
memode terms with Murderer
being the symbolic name of a lower level memode in the memode with
the symbolic name Macbeth.
CODIL can also have
items such as
Murderer
IS = Person
which in memode terms indicates that
any memode which contains Murderer
must also contain Person.
4.3.2 CODIL Facts and the Brain's Working Memory
At the heart of the CODIL interpreter is a series of registers, the Facts, each of which contains an item, and which together describe the current context. The Facts may be generated from new input, by loading in remembered contexts – so one can recall items that had been there earlier – and by making deductions (discussed later). In addition the context defined by the current Facts can be saved as part of the system's knowledge base of CODIL statements.
In mapping CODIL onto the memode model the Facts are therefore symbols for the memodes which, at any one time, are active and form part of the brain's conscious working memory. The process of saving the contents of the Facts is equivalent to creating a new “top” memode. In addition, loading a CODIL file such as Death-of-Duncan into the Facts will restore the values to the list of items Macbeth; Duncan; Dagger; Murder.
This is important in the brain model as it allows the brain to remember past contexts from the top down, rather than bottom up from current inputs to the brain. This means that under some circumstances, when a memode becomes active a “down” signal can activate the level of memodes immediately below the “top” neuron. Such a mechanism is essential if the brain is to be able to selectively remember the past and use its recollection to guide its actions.
A key question is the relationship between the number of CODIL Facts and the size of the brain's working memory. At a very early stage of the research into CODIL it was realised that if the system was to process information in a way that the human user could understand, the number of the Facts registers had to be very small – especially when compared with the number of symbolic names that would be needed in a conventional computer program of the same size. This was partly because in CODIL the symbolic names represent the meaning of the contents, and not a stored program computer address, and partly because it turned out that the Facts rarely needed to contain much more than half a dozen active items.
There is
an immediate and obvious difference between the default way in which
CODIL controls the number of active Facts and how the brain works.
The reason is that CODIL was designed to be a practical tool and the
first requirement is that the user is in control and the system must
never try to “trial and error” learn unless the user specifically
wants this to happen. The result is that CODIL has what can be
considered garbage collection procedures to ensure that Facts items
are automatically removed when they are no longer needed. Perhaps the
brain has something equivalent but it is more likely that the
activity of memodes in the working memory simply reduces with time,
and the least used ones simply drop out of the working memory, in
some cases being replaced by more active new memodes.
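The decay mechanism suggested for the brain's working memory could be sketched as follows. The numbers, names, and the exponential decay rule are all assumptions chosen for illustration, not part of the model:

```python
def decay_step(working_memory, decay=0.5, threshold=0.2):
    """Fade every memode's activity level; drop those that fall below threshold."""
    return {m: a * decay for m, a in working_memory.items() if a * decay >= threshold}

# A strongly active memode survives a decay step; a weakly active one drops out.
wm = {"Peacock": 1.0, "Letter-writing": 0.3}
wm = decay_step(wm)
```

Unlike CODIL's explicit garbage collection, nothing here decides that an item is "no longer needed": the least used memodes simply fade below the threshold and disappear from working memory.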
4.3.3 Decision Making and Learning.
The CODIL decision making routine (once you ignore input and output) does little more than compare statements from the knowledge base, item by item, with the items in the Facts, and if it deduces an item is true the item is moved to the Facts. This is equivalent to following links between memodes and deducing that an inactive memode needs to be activated.
For instance the Facts
could contain
Murderer
= Macbeth; Victim = Duncan; Weapon = Dagger; Author = Shakespeare.
and a statement in the
knowledge base contains
Murderer
= Macbeth; Victim = Duncan; Action = Murder.
So the items on the left of the statement are compared with the Facts, and the item Action = Murder (on the right) is moved to the Facts, which become:
Murderer = Macbeth; Victim = Duncan; Weapon = Dagger; Author = Shakespeare; Action = Murder.
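The deduction step just described can be sketched in Python. The dictionary representation of item pairs is my own shorthand, not CODIL's notation:

```python
def deduce(facts, statement):
    """If the statement's items found in the Facts all match, move the
    remaining (deduced) items into the Facts; otherwise leave Facts alone."""
    conditions = {k: v for k, v in statement.items() if k in facts}
    if conditions and all(facts[k] == v for k, v in conditions.items()):
        deduced = {k: v for k, v in statement.items() if k not in facts}
        return {**facts, **deduced}
    return dict(facts)

facts = {"Murderer": "Macbeth", "Victim": "Duncan",
         "Weapon": "Dagger", "Author": "Shakespeare"}
rule = {"Murderer": "Macbeth", "Victim": "Duncan", "Action": "Murder"}
facts = deduce(facts, rule)   # Action = Murder is moved into the Facts
```

Note that the matching is directional, as the text goes on to stress: the rule lets the system deduce the murder from Macbeth and Duncan, but a statement of the murder alone would not match the conditions and so would not implicate Macbeth.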
However
it is important to realise that in CODIL the human user describes
information in terms of items and the way they are linked so one can
easily deduce that if the system knows about Macbeth and Duncan it
can deduce the fact of a murder. On the other hand if there has been
a murder the system should not automatically deduce that Macbeth did
it.
This suggests that as soon as you try to apply the model to a real brain there is a potential difficulty, as you introduce another level of learning: the brain not only needs to build a network of memodes by some form of trial and error learning (building the up links), but it also has to learn what deductions it can validly make between memodes (building the down links). If learning involves simple trial and error, the amount of time spent learning will increase rapidly as the number of memodes increases – and could well be a limiting factor in the maximum size of a real brain.
A study
of some of the CODIL applications highlights an even bigger learning
problem. The brain model says nothing about the order that things are
done in and this raises the question of serial and parallel
processing when applied to more complex tasks. The position is best
illustrated by the most complex test application implemented in
CODIL.
TANTALIZE was a problem solving package which could find the answer to a wide range of logic problems and actually solved 15 consecutive New Scientist Tantalizers (now called Enigma). It illustrates the fact that while CODIL makes no distinction between “program” and “data”, the user can actually use the language to write “programs” – with files of CODIL statements being used as production rules. The package worked in three stages. The first asked the user to describe the problem. The second took the description and converted it to a set of production rules, and the third stage was simply to obey the generated production rules. In many cases a learning function built into the CODIL interpreter was used to optimise the order in which the production rules were executed, as this often significantly reduced the size of the problem space to be searched.
While I would not claim that it is possible to map TANTALIZE onto the ideal brain model, there is sufficient evidence to believe that the success of the package demonstrates that the ideal brain model is capable of supporting a range of logically complex information processing problems. However it also highlights the problem that the CODIL system was told how to do it – and the complexity of the solver is such that it could not be learnt by trial and error.
4.3.4 Other CODIL Features
CODIL contains a number of features which were included to make the system usable for a wide range of tasks appropriate to an information processor in today's world, but which would be irrelevant to a simple animal brain model, and so were not mentioned earlier. In some cases the features described below were handled differently in different versions of the CODIL interpreter.
Handling Numbers: The value part of a CODIL item – as in “Price = 49.95” – could be a number, and arithmetic expressions were possible; in MicroCODIL the full range of facilities in the BBC Micro Computer Basic was available. Because any CODIL item represented a set or a subset, items such as “Born > 1975”, or ranges, could be handled routinely, making it easier to handle less well defined information.
Lists:
Lists of members of a set were possible, as in “Colour = Red &
White & Blue.” Normally one would only be interested if the
list was true or false, but for some kinds of task a simple indexing
facility allowed one to distinguish between different items in the
list. Negation was also possible as in “Disease NOT Measles”.
(The relevance of negation is discussed later.)
Inexact
Matching: MicroCODIL included a series of
fuzzy/approximate matching facilities which replaced the True/False
decisions with a variable probability threshold. While the techniques
used are not directly appropriate to the ideal brain model they at
least indicate the approach will work in circumstances where exact
matching is inappropriate.
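One way such inexact matching could work is sketched below. The scoring rule and threshold are my own illustrative assumptions, not a description of MicroCODIL's actual facilities:

```python
def inexact_match(facts, statement, threshold=0.6):
    """True/False matching replaced by a variable probability-like threshold:
    the statement matches if enough of its items agree with the Facts."""
    matches = sum(1 for k, v in statement.items() if facts.get(k) == v)
    return matches / len(statement) >= threshold

facts = {"Spots": "Yes", "Fever": "High", "Cough": "No"}
symptoms = {"Spots": "Yes", "Fever": "High", "Cough": "Yes"}
```

With two of three items agreeing the statement matches at the default threshold but fails a stricter one, which is the essence of replacing True/False decisions with a tunable cut-off.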
Learning
Functions: A limited number of experiments were carried
out in which the number of times items in the Facts (the equivalent of
the ideal brain's working memory) were accessed was recorded and
used to garbage-collect items. In addition, a working area of the
knowledge base could be run with similar access records – with the
most frequently used information rising to the front and the least
used (in some cases) being dropped from the back. This was used to
great effect in the TANTALIZE package, which used the learning
facility to optimise the order in which rules were used.
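The access-count mechanism described above can be sketched in a few lines. The class name, the counting scheme, and the garbage-collection threshold are my own illustrative choices, not TANTALIZE's actual implementation.

```python
# Toy sketch of access-count "learning": frequently used entries rise
# to the front of the ordering, and entries that are never used can be
# garbage-collected from the back.
class WorkingArea:
    def __init__(self, entries):
        self.counts = {e: 0 for e in entries}

    def access(self, entry):
        self.counts[entry] += 1

    def ordered(self):
        # Most frequently accessed entries first.
        return sorted(self.counts, key=self.counts.get, reverse=True)

    def garbage_collect(self, min_count=1):
        # Drop entries that never reached the usage threshold.
        self.counts = {e: c for e, c in self.counts.items()
                       if c >= min_count}

area = WorkingArea(["rule_a", "rule_b", "rule_c"])
for _ in range(3):
    area.access("rule_b")
area.access("rule_a")
print(area.ordered())     # rule_b rises to the front
area.garbage_collect()
print(list(area.counts))  # rule_c, never used, is dropped
```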
5 Discussion
This
model raises many questions – and it is not practical to examine
them all here. So I have chosen some I feel are important, and will
address others on the blog
– the selection depending on which issues are raised by readers of
this discussion paper.
5.1 The Ideal Brain is not a Real Brain
There are
obvious ways in which a real brain differs from the ideal brain –
the most obvious being that different parts of the brain serve
different functions. Because there are well-defined connections
between the different areas, and they clearly exchange signals,
they almost certainly use the same “neural code”. In
evolutionary terms one would expect changes which optimised (and
perhaps supplemented) the ideal brain model for the particular
activity being carried out, and one would expect the
biggest differences in the parts of the brain dealing with critical
senses such as sight. It is not the purpose of this paper to
speculate on such differences – but rather to concentrate on the
generic parts of the brain relating to memory and decision making –
the frontal lobes in human beings.
5.2 The Problem of Learning
The ideal brain model is based on
the assumption that the brain starts with no effective connections
between neurons and learns to recognise more and more complex patterns. If
the amount of learning is small and the decisions taken remain simple,
trial and error learning would be quite adequate for an
unsophisticated animal brain. But in the real world learning takes
time, and so an animal with a larger brain needs more time
(longer before maturity) to be loaded with the information it needs
to survive. As the brain handles more and more complex situations,
the trial and error learning time starts to increase rapidly, as
one is now concerned with learning not only individual concepts
but also the relationships between them. There is another learning
hurdle when considering what is needed to make tools – when a
network which processes information in parallel has to learn a series
of activities which only work if all activities are learnt correctly
and in the right order!
I have not looked at a mathematical
model to see how trial and error learning times would increase with
the size of the brain and the complexity of the task – but it is
clear that the human brain could not function on trial and error
learning alone.
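A back-of-envelope sketch (my own, not a model proposed in the paper) shows why the learning time explodes: learning n independent concepts costs on the order of n trials, but learning the pairwise relationships between them adds roughly n(n-1)/2 more items to be learnt.

```python
# Illustrative growth of trial-and-error learning cost: concepts grow
# linearly, but the relationships between them grow quadratically.
def trial_cost(n_concepts, trials_per_item=10):
    pairs = n_concepts * (n_concepts - 1) // 2  # pairwise relationships
    return (n_concepts + pairs) * trials_per_item

for n in (10, 100, 1000):
    print(n, trial_cost(n))
# Cost grows roughly 100-fold for each 10-fold increase in concepts.
```

Even this crude count, which ignores ordered sequences of activities entirely, suggests quadratic or worse growth, consistent with the claim that a human-sized brain could not rely on trial and error alone.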
Rather than discuss the issue
further here I have written a companion discussion paper, “From the Neuron to Human Intelligence: Part II: Evolution and Learning.”
This looks at a possible pathway, concentrating on the last 5-10
million years of human evolution, which identifies some key tipping
points that would explain the sudden emergence of a much more
intelligent being.
5.3 Formal Logic and Confirmation Bias
Any mathematician looking at the ideal brain model
will have noticed a serious “flaw” which means that it is not
even a good model of primary school set theory! As described, the
model cannot handle negation. This is quite deliberate, as the aim is
to start with a minimal animal brain. A mouse does not need to learn
“IF food AND NO cat THEN eat” because the simpler high-priority
“IF cat THEN run away” would rapidly take the mouse away from the
food before “IF food THEN eat” could apply.
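The mouse example can be sketched directly: rules fire only on patterns that are present in the input (no negation anywhere), and the highest-priority matching rule wins. The rule set and the priority values are illustrative assumptions.

```python
# Sketch of priority-based decision making without negation: each rule
# is (pattern of PRESENT inputs, action, priority), and the
# highest-priority rule whose pattern is present fires.
rules = [
    ({"cat"},  "run away", 10),  # high priority: danger first
    ({"food"}, "eat",       1),
]

def decide(inputs):
    fired = [(prio, action) for pattern, action, prio in rules
             if pattern <= inputs]
    return max(fired)[1] if fired else "do nothing"

print(decide({"food"}))         # eat
print(decide({"food", "cat"}))  # run away - no "NOT cat" rule needed
```

Because the cat rule outranks the food rule, the mouse never needs a rule mentioning the *absence* of a cat; priority ordering substitutes for negation.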
So at the animal level the feeble logic capability
of the ideal brain is not an obstacle – but it has interesting
effects when we consider human psychology. Learning involves matching
input patterns with remembered patterns and making decisions as a
result. But if something is not present it does not generate an input
pattern.
This means that the ideal brain model will, in the
words of the Harold Arlen song, “Accentuate
the Positive – Eliminate the Negative – Latch onto the
Affirmative.” In fact the model predicts a well-known feature of
human brains – confirmation
bias.
5.4 Memory and Uncertainty
An animal brain evolved
to allow information from the past to influence the animal's
behaviour in the present – and not to reminisce. When a memode in
the ideal brain is activated for any reason, only parts of it will be
activated, and sometimes new links will be made. Useful information
is reinforced when it is used, while unused information fades away and
is eventually lost, so over time the memode will change. If you
reminisce, the relevant memories in an ideal brain will
automatically change because they have been activated, and the more
someone thinks about the past the more their memories will change.
This makes learning the truth in a court difficult when many years
separate the event from the trial, and some of the witnesses have
been reliving traumatic memories.
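The claim that recalling a memory changes it can be sketched with a toy reinforcement-and-decay rule. The link names and the boost/decay parameters are my own illustrative assumptions, not values from the model.

```python
# Toy sketch of memory drift: each act of recall strengthens the links
# that were activated and lets the unused links decay, so repeated
# reminiscence moves the stored pattern away from the original.
def recall(links, activated, boost=0.2, decay=0.9):
    """Return new link strengths after one act of recall."""
    return {name: (s + boost if name in activated else s * decay)
            for name, s in links.items()}

memode = {"face": 1.0, "voice": 1.0, "place": 1.0}
for _ in range(5):                     # reminisce five times,
    memode = recall(memode, {"face"})  # only the face is activated
print(memode)  # "face" is strengthened, "voice" and "place" fade
```

After a few recalls the activated part dominates and the rest has faded, which is the mechanism the paragraph offers for unreliable courtroom testimony.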
It may well be (with an
analogy to the ideal gas model) that something like the Uncertainty Principle
applies to the brain – because the act of trying to record the information
in the brain changes that information.
5.5 A Note on Nomenclature
In terms
of other research it is important to point out that in the ideal
brain model all neurons are identical, and each is the top of a memode
which is recursively defined as the sum of the definitions
associated with all the lower memodes which can send it signals. As
memodes overlap, and the properties of links between neurons will
change as the brain learns, this is a deliberately fuzzy
definition.
In established research several terms are used to identify “different”
types of neuron. Grandmother
cells react to specific complex external objects, and
only to those objects, and are equivalent to the top neuron in a
memode. But in the model all neurons are equivalent and represent
some external object or abstract concept – all neurons are
grandmother neurons unless some arbitrary measure of complexity is
applied. The same comments apply to the related idea of Concept
Cells.
In
the same way every neuron in the ideal brain model has the ability to
work as a Mirror
Neuron, but in many cases
there will never be any call to act in this way – and in other
cases it would be difficult to carry out experiments to observe the
behaviour. While this means that, in the ideal brain model, there is
no special kind of mirror neuron, the concept is useful when it comes
to discussing Evolution and Learning in Part 2.
6 Conclusion
There is no formal
conclusion because the ideas are put up here for open discussion and
debate. All I have done is to
note that many people seem to have been having difficulty in finding
some way of relating neurons to human intelligence. I thought a
project I worked on over 20 years ago might be relevant, and mapped my
findings on that project onto the ideal brain model.
Having done this I feel it could form a good starting point for debate on how neuron activity relates to human intelligence, but as a scientist I know that
ideas that look promising can lead to dead ends. Even if it turns out
that the model is flawed, discussing its strengths and limitations could
help others to come up with better models.
So if you like it, say why. If you are not sure whether it is relevant to your research, tell me about your research so I can suggest how it might fit in.
If you disagree your comments are even more valuable – as the key to good research is to discover the awkward questions.
And if you have any constructive ideas for exploring the concepts further – your contribution is very welcome.