I am currently drafting a detailed paper modelling the evolution of human intelligence, and a key part of the work involves the definition of a "symbolic brain language" which shows how information is stored and processed. The following draft section explains why there is a difference between conventional programming languages and the proposed symbolic brain language, and also explains why a simple approach should allow a complex system to be modelled.
Your comments on the following draft text would be appreciated.
The need for a Symbolic Brain Language
In order to discuss the evolutionary
changes needed in the transition from a comparatively simple animal brain to
that of a human being we need a suitable model of what is actually changing.
This means we need to start with a model of how information is
stored in the brain and how this information is then used to make decisions
which improve an animal’s chances of survival. Once we have such a model we can
then explore how it might change to support the intellectual activities of the
human brain, and relate weaknesses in human thought processes to its animal
origins.
Computers were designed to do highly repetitive
mathematical tasks of a kind which humans find difficult to do quickly and
accurately, and which are irrelevant to the survival needs of animals and our
hunter-gatherer ancestors. For this reason it would be surprising if there were
any close similarity between the way a typical stored program computer
works and the way the human brain stores and processes information, but there are some
general similarities which are worth exploring.
Both the computer and the brain can be
considered to have a memory and a processing unit (a network of interconnected
neurons in the case of the brain) which automatically processes signals coming from
input devices (sense organs) without any real understanding of what those
signals represent.
Both produce output which relates to the
outside world. In the case of a computer this can involve activities as varied
as working a hole-in-the-wall cash machine or producing a visual display of the
weather forecast. The human brain’s
output could involve a game of golf – or writing a symphony. Both have
failings. For instance a computer word processor may be very good at formatting
text, correcting spelling errors and the more obvious grammatical
mistakes, but it has no understanding of the meaning of the text. However, computers were actually designed to plug
gaps in the ability of the human brain to process large numbers quickly and reliably.
In between the input and output there is an
interface (or collection of interfaces) which uses symbols to link the automatic
but non-understanding processor to the real world entities on the output side,
using information stored within the processor.
In a stored program computer this
information is stored as either passive data or active program instructions
written in suitable programming languages. These languages are designed to link
the numerical data stored in words in the computer memory with symbolic names
which relate to the task to be performed.
These symbolic names can represent a piece of data (such as “London”), the
format of some data (text, numerical, picture, etc.), an address where
data might be found (Customer-name, Date-of-birth, etc.), an operation (ADD,
MOVE, etc.), the address of a subroutine containing a set of rules, or many
different kinds of addresses relating to the internal administration of the
data. At any one time these programs will allow a number of user-oriented tasks
to be carried out, supported by an army of support programs, such as the
operating system and links to other computers. In addition there is an almost
unlimited number of other tasks the computer could do given the right programs.
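As a purely illustrative aside, the short Python sketch below (with entirely hypothetical names, not taken from any real system) shows the kinds of things such symbolic names can stand for in a conventional program: named data values, named addresses where data might be found, and named operations and subroutines.

```python
# A minimal illustrative sketch (all names hypothetical) of what symbolic
# names stand for in a conventional program.

customer = {                        # named addresses where data might be found
    "Customer_name": "John Smith",
    "Date_of_birth": "1970-01-01",
    "Home_city": "London",          # a piece of data such as "London"
}

def add(a, b):                      # an operation (ADD)
    return a + b

def move(record, source, target):   # an operation (MOVE): copy one field
    record[target] = record[source]

def greeting(record):                # a named subroutine containing a rule
    return "Dear " + record["Customer_name"]

move(customer, "Home_city", "Delivery_city")
print(greeting(customer))            # Dear John Smith
print(add(2, 3))                     # 5
```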
Of course all these programs need a human
creator who has designed a comprehensive set of rules – and this immediately
highlights an inherent weakness in the stored program computer design. In order to use a computer it must be
possible to know the rules in advance, and in practice the cost of defining
the rules should be less than the value of having the information processed automatically.
This means that computers are unsuitable for low-volume, hard-to-predict tasks –
i.e. the unexpected.
It is clear that a computer system,
including the tasks it is programmed to carry out, involves a very high level of
complexity. The brain works in a different way, but as it consists of around
100 billion neurons and many more synapses, it is also an extremely complex
system. Trying to model everything it does is clearly a very complex task –
but the purpose of this paper is not to model a complete working brain, but
rather to suggest a simple model of how the brain works which makes sense in
evolutionary terms.
One of the tricks when modelling anything
which is very complex is to assume that the complexity approaches infinity –
and then forget about all the variation (on the grounds that it is too complex
to enumerate) and look at common factors.
The obvious example where this is done is
the theory of evolution. There are something like 10 million species living at
present, and the total number of species since life first started may well be
around 1 billion. For some single-cell species there may be well in excess of a
billion individuals living at any one instant in time. The theory of evolution starts with
the assumption that every one of the vast number of organisms that ever lived is different
(although imperceptibly so in many cases). However they all have one feature in
common – the aim of passing their genes to the next generation before
they die. Only some succeed, depending on an individual’s interactions with its
environment, which will vary from case to case. The complexity of the living
world is due to very large numbers, a very long time, and a non-random statistical
selection process.
The aim of this paper is to suggest a
similar simplification in breaking down the complexity of the brain’s
activities – and the key is a mathematical function which can, in theory,
automatically generate infinite complexity. This function is recursion and, as the
paper will show, the power of recursion allows a single model to deal with
cases ranging from a single neuron receiving a signal up to the processing of
complex concepts such as “Evolution”.
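To illustrate the point (this is purely an illustration, not part of the model itself), a single recursive rule of a few lines can generate structures of unbounded depth:

```python
# A minimal sketch of how one recursive rule generates, in theory,
# unbounded complexity: each level is a pattern built from the
# patterns of the level below it.
def build(depth):
    if depth == 0:
        return "signal"                           # base case: a raw input
    return [build(depth - 1), build(depth - 1)]   # a pattern of patterns

print(build(1))  # ['signal', 'signal']
print(build(3))  # eight 'signal's nested three levels deep
```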
At the bottom end of the model there is a
neural network which recognizes patterns and which can generate patterns of patterns.
(Later I will introduce the term memode for such recursive patterns of patterns.)
At the top end one has a simple set processing language in which named concepts are
defined as sets containing other concepts. The two models are equivalent, and this
means that at any level within the recursion one can draw a line and say that below this
line is an automaton processing patterns and above this line there is “an intelligent
system” which makes logical decisions by manipulating sets. The proposed symbolic
language uses human-level concept names, and the relationships between them, to
describe how information is stored and processed.
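As a rough illustration of this set processing view (the concept names below are hypothetical placeholders, not the paper's actual definitions), each named concept can be treated as a set of other concepts, and recursion expands any concept down to the bottom-level patterns:

```python
# A minimal sketch: named concepts defined as sets containing other
# concepts, expanded recursively down to raw patterns.
concepts = {
    "Evolution": {"Species", "Selection", "Inheritance"},
    "Species":   {"Animal", "Plant"},
    "Selection": {"Environment", "Survival"},
    # names not defined here are treated as bottom-level patterns
}

def expand(name):
    """Recursively expand a concept into the patterns it rests on."""
    members = concepts.get(name)
    if members is None:        # below "the line": a raw pattern
        return {name}
    result = set()
    for m in members:          # above "the line": set manipulation
        result |= expand(m)
    return result

print(expand("Evolution"))
# e.g. {'Animal', 'Plant', 'Environment', 'Survival', 'Inheritance'}
# (set ordering may vary)
```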
It should be noted that the neural net
model used as a starting point when discussing the evolution of the human brain
has two main weaknesses. The first is that it would take more than a brain’s
lifetime to learn enough to handle high level concepts by trial and error, and
the second is that it does not directly recognize negatives. However there are
plenty of other examples where evolution has made do with what, in retrospect,
seems a bad design, and the paper will discuss how these limitations are partly
overcome, and how they manifest themselves when scaled up to the level needed to support
human intellectual activities.