Several years ago I posted Dartington Hall School and thinking outside the Educational Box, and earlier this week I posted the following to the private school Facebook page about the hazards of thinking outside the box.
One thing Dartington taught me was that the establishment view was not always right.
But has it been a good idea to ask awkward questions? Or would I have had an easier life if I had forgotten what I learnt at Dartington and sheepishly gone along with the crowd? Now in my 80s I am about to stick my head over the parapet yet again …
At first sight the question I asked some 50 years ago was one many people have asked at some time or another: "Why are computers unfriendly black boxes?" My problem started when I asked the question with a difference and suggested an answer which did not support what the computing establishment took for granted. The question I asked was: "Why do computers need to be unfriendly black boxes?"
So what happened?
(1) In 1967 I observed that the problem with one of the biggest computerized sales accounting systems in the UK was that the salesmen worked in a complex and ever-changing market place and didn't understand the very rigid and strictly pre-defined programs that processed THEIR data. So I suggested a way to design the system which they could understand.
SQUELCH - "Surely everyone knows sales
staff are so stupid that they can't understand computers and it needs very
clever and highly paid people (like my boss who was rejecting the idea) to
design computer systems. Of course I knew that the conventionally programmed
system my boss had helped to build was hard to understand - but he brushed
aside the idea that this meant we should build systems that the users did
understand.
(2) I moved to work with a computer manufacturer on future systems - and within months two of the UK computer pioneers had realised friendly computers were a good idea (this was at a time when computer terminals for users were almost unheard of and the first personal computer was a decade away). I was made project leader doing research on a transparent (rather than black box) computer with a user-friendly symbolic assembly language. The idea was to build a system which explained what it was doing in terms the user could understand. The initial tests looked very promising.
ZAP: The company was involved in a government-inspired merger and the department I was in was closed down. My project was cancelled on the strength of two half-page reports written by people who had never contacted me, as project leader, to find out what the project was doing. The supposed reason was that the idea was not part of the company's plans to build a new and improved conventional computer - so there was no need to consider unconventional ideas. I was made redundant.
(3) After a break I had a chance to restart the research at university (salary paid but no funding). I soon found that the user-friendly system I had initially designed for a complex commercial environment could also solve the "Tantalize" puzzles published in the New Scientist - in one case solving 15 in succession. Surely a system which could help salesmen in a complex market AND solve logic puzzles in a serious weekly publication must be relevant to artificial intelligence.
BLAP: What happened was that paper after paper was rejected by anonymous reviewers (presumably members of the artificial intelligence establishment). Typically the review would say "too theoretical to ever work" when the paper included multiple examples of the system actually working. One of the leading A.I. experts explained that I should forget it, because if my system was to be considered intelligent it must be able to play chess well! I got so depressed that I abandoned this line of research, because I felt blocked by a brick wall of closed minds who were more interested in the games and logic problems which appeal to mathematics undergraduates and who ignored the intelligence needed to handle the complexities of real-world tasks.
(4) I kept going by concentrating on the education aspects of my work, and by 1980 had my unconventional software supporting classes of up to 125 students on terminals. I started work on a version for use in schools (much delayed by family problems ending in an unfortunate suicide), but in 1986 the schools package was being trial-marketed as MicroCODIL. It received enthusiastic reviews from publications such as New Scientist, Times Educational Supplement, Educational Computing, The Psychologist and many hobby magazines. In addition I had a paper accepted for publication in the Computer Journal, the top UK publication in the computer field. Surely this was an opportunity to take the research further.
KERPLOP: Two things happened. A new professor was appointed who came from the artificial intelligence area. He considered that his chief task was to impress the Vice Chancellor by exploiting Maggie Thatcher's policy of getting rid of "deadwood" in universities. He was totally uninterested in what I was doing, except that it did not fit into his strongly establishment-oriented views and therefore I must be "deadwood." Almost immediately the vicious verbal bullying started. Because I was weakened by PTSD after the family tragedy I mentally folded - and took early retirement simply to escape - and the research was terminated. It was no consolation to discover that a few years later the professor was quietly given a year to find a job elsewhere - after a union investigation had found how many other lives he had disrupted.
I needed a change, and after a year working in Australia on climate change and environmental databases I returned to England, did voluntary work for the mentally ill (including six years on Mind's Council of Management) and switched my research interests to local history.
(5) A few years ago I started to wonder what I should do with the large pile of project papers. Earlier this year it was decided that eventually they should go to the Centre for Computing History at Cambridge, as part of the LEO Computer Society Archives. In addition I have been exploring the web to re-assess my 1967 ideas in terms of today's technology. It seems that what I originally proposed can be considered to be a language for transferring information between one neural net (the human brain) and another (the computer). It can be related to Turing's idea of a "simple child brain" being trained by adults. I have just written and am circulating a draft paper, A Possible Evolutionary Neural Net Model of Turing's Simple Child Brain.
BOOM BOOM: I may be doing it again. Everyone is talking about modern artificial intelligence systems which use neural nets. I am sure billions have already been spent on building and using them. Everyone knows that such systems need very powerful computers - and vast quantities of data for training purposes. It is also well known that, just like conventional computer systems, such systems cannot explain what they are doing to humans … for exactly the same reason - the rush to exploit human-unfriendly technology to do admittedly useful things on a vast scale meant insufficient time for asking if the foundations are solid.
The point I am hoping to make (and which may again be overwhelmed by cries from the establishment that "everyone does it differently") is that if you want systems which can explain what they are doing you go back to first principles and start by building a human-friendly foundation. I have started with a system where the human tells the neural net how to link the nodes in a human-friendly language, so that the neural net accurately imitates what the human already knows. This process can be related to Turing's 1950 idea that if you want an intelligent system you should start with a simple child brain, equipped with a notebook full of blank sheets, which are filled by training by adult humans. This is an evolutionary chicken-and-egg solution - the adults know what they know because as children they were taught by adults. The result is "deep learning" spread over thousands of generations and possibly millions of years.
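To give a feel for what "the human tells the system how to link the nodes in a human-friendly language" might look like in practice, here is a deliberately tiny, hypothetical sketch in Python. It is my own illustration, not CODIL and not any published neural net package, and every name in it (TeachableNet, teach, ask) is invented for this example. The point it tries to capture is that each link is stored together with the human's own wording, so any answer the system gives can be explained in the terms it was taught.

```python
# Toy illustration only: a "teachable" network whose links are human-supplied
# statements, so every answer can be traced back to what an adult taught it.

class TeachableNet:
    def __init__(self):
        # Each concept maps to a list of (linked concept, the human's wording).
        self.links = {}

    def teach(self, concept, linked_to, wording):
        """The 'adult' fills a blank page of the notebook: record a readable link."""
        self.links.setdefault(concept, []).append((linked_to, wording))

    def ask(self, start, target):
        """Follow taught links from start towards target, keeping the explanation trail."""
        trail = {start: []}
        frontier = [start]
        while frontier:
            node = frontier.pop()
            for nxt, wording in self.links.get(node, []):
                if nxt not in trail:
                    trail[nxt] = trail[node] + [wording]
                    frontier.append(nxt)
        if target in trail:
            return "Yes, because: " + "; ".join(trail[target])
        return "I have not been taught anything linking these."

# Example 'training by an adult', using an invented sales scenario:
net = TeachableNet()
net.teach("salesman", "customer order", "a salesman records customer orders")
net.teach("customer order", "invoice", "every customer order produces an invoice")
print(net.ask("salesman", "invoice"))
# -> Yes, because: a salesman records customer orders; every customer order produces an invoice
```

Unlike a conventionally trained neural net, which would need thousands of examples and still could not say why it answered as it did, a system built this way starts from human-readable statements, so its explanations come for free.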
So I got that off my chest. Was Dartington right to teach me that the establishment was not always right? And do any of you have any Dartington-inspired experiences of trying to promote anti-establishment views?
If you are interested in what I am currently doing, see my blog An Evolutionary Model of Human Intelligence.