"In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable."
The above quotation appears at the start of the article "Will AI's bubble pop?" in the New Scientist of 16th July, by Sally Adee. The significance of the quote is that it was made by one of the founding fathers of Artificial Intelligence, Marvin Minsky, in 1970 - and it is quite clear that the prediction was wildly optimistic. The article goes on: "When the chasm between Minsky's promise and reality sank in, the disappointment destroyed A.I. research for decades". One of the reasons given was that there was "a research monoculture focused on a technique called rule-based learning which tried to emulate basic human reasoning."
When, in the 1970s, I was researching CODIL - a pattern-matching language based on observations of how clerks in a large commercial organisation thought about sales contracts - almost all attempts to publish were blocked because the approach did not conform to the rule-based monoculture, which was dominated by mathematicians implementing formal mathematical models.
As a result of the article I have sent the following letter to the Editor of New Scientist, and if it is published I will add a comment below.
Re "Will AI's bubble pop?" - New Scientist 16 July

A "blue sky" casualty of the AI monoculture of the 1970s described by Sally Adee (16 July, p16-7) was CODIL. This was a pattern-recognizing language initially proposed in 1967 as a terminal interface for very large commercial systems. Later study showed CODIL could handle many very different tasks, such as solving New Scientist's Tantalizers (21 August 1975, p438) and supporting an AI-based teaching package (New Scientist 24 Sept. 1987, p67). The "not invented here" reaction of the AI establishment contributed to the project's demise.
I am currently reassessing the surviving research notes. In modern terminology CODIL was a highly recursive network language for exchanging messy real-world information between the human user's "neural net" and an "intelligent" robot assistant. CODIL's versatility arose because it allowed tasks to morph dynamically from open-ended pattern recognition, via set processing, to predefined rule-based operation. The experimental work concentrated on communication and decision-making activities, but the inherent recursive architecture would support deep network learning.
It seems that CODIL mimicked human short-term memory - an area where conventional AI has been singularly unsuccessful. In evolutionary terms the re-interpreted model suggests that early humans used an initially primitive language to transfer knowledge from one brain to another, creating a cultural neural net now some 10,000 generations deep! A CODIL-like brain model would automatically show weaknesses such as confirmation bias and a tendency to believe the most charismatic leader without questioning the accuracy of the information received.
Perhaps it is time to resurrect the project.
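For readers unfamiliar with the idea, the way a query can slide between open-ended pattern recognition and rule-like behaviour can be loosely illustrated in modern code. The sketch below is purely my own illustration - the flat attribute-value representation, the sample data, and the `match` function are invented for the example and are not actual CODIL notation:

```python
# Hypothetical illustration (not CODIL syntax): knowledge is held as
# attribute=value facts, and a partially specified pattern is matched
# against them. An underspecified pattern behaves like open-ended
# pattern recognition; a fully specified one behaves like a rule.

facts = [
    {"CUSTOMER": "Smith", "PRODUCT": "wire",  "DISCOUNT": "10%"},
    {"CUSTOMER": "Smith", "PRODUCT": "cable", "DISCOUNT": "5%"},
    {"CUSTOMER": "Jones", "PRODUCT": "wire",  "DISCOUNT": "none"},
]

def match(pattern, facts):
    """Return every fact consistent with all attributes in the pattern."""
    return [f for f in facts
            if all(f.get(attr) == value for attr, value in pattern.items())]

# Open-ended query: everything known about Smith (two facts match).
smith_facts = match({"CUSTOMER": "Smith"}, facts)

# Rule-like query: the discount when Smith buys wire (one fact matches).
smith_wire = match({"CUSTOMER": "Smith", "PRODUCT": "wire"}, facts)
```

The same `match` operation serves both ends of the spectrum; only the completeness of the pattern changes, which is the versatility the letter describes.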