Tuesday 16 August 2016

The Problem with Big Data based Machine Intelligence

I have rather neglected this blog recently, partly because of other distractions, including a stimulating FutureLearn course on the Hobbit (Homo floresiensis), and partly because I am concentrating on writing up my ideas about the evolution of human intelligence. However, I have just come across an article by Gary Marcus and Ernest Davis entitled "Eight (No, Nine!) Problems With Big Data", which is well worth reading.

The issue I have with the big data approach to machine intelligence is that it tackles the problem in a very different way from the human brain.

If we think about the evolution of the brain, it started very small and incrementally became bigger over millions of years. And in each animal, including humans, the brain starts off knowing next to nothing apart from some pre-programmed instincts, and its knowledge increases incrementally through life. The economics of evolution involve optimising the use of resources to maximise survival, which sets limits both on the size of the brain and on the amount of time spent learning. In effect, a small amount of "data" is beautiful, as long as there is enough to be cost-effective in the battle for survival.

Big data applications involve using vast amounts of data which are already available in digital form. The Google language translator, for example, uses millions of document texts in different languages (so the data collection cost per byte is extremely low) and applies powerful statistical processes of a kind which are clearly not built into the human brain.
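As a toy illustration of what "statistical processes over parallel texts" can mean, here is a minimal Python sketch that learns word translations purely from co-occurrence counts over a tiny invented corpus. It is emphatically not Google's actual method - just a hint of how translation can fall out of data volume rather than hand-coded grammar.

```python
# Illustrative sketch only: a toy statistical "translator" built from
# co-occurrence counts over a tiny invented parallel corpus. Google's real
# system is vastly more sophisticated; this merely shows the flavour of
# learning translations from data rather than from hand-coded rules.
from collections import Counter, defaultdict

parallel_corpus = [
    ("the cat sleeps", "le chat dort"),
    ("the dog sleeps", "le chien dort"),
    ("the cat eats",   "le chat mange"),
]

cooccur = defaultdict(Counter)   # source word -> counts of target words
target_totals = Counter()        # overall frequency of each target word
for src, tgt in parallel_corpus:
    for t in tgt.split():
        target_totals[t] += 1
    for s in src.split():
        for t in tgt.split():
            cooccur[s][t] += 1

def translate_word(word):
    """Pick the target word most strongly associated with `word`.
    Dividing by (frequency + 0.5) penalises words like 'le' that
    co-occur with everything."""
    if word not in cooccur:
        return word
    return max(cooccur[word],
               key=lambda t: cooccur[word][t] / (target_totals[t] + 0.5))

print(" ".join(translate_word(w) for w in "the dog eats".split()))
# prints: le chien mange - produced by statistics alone, with no grammar rules
```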

Of course, in many cases the big data approach is invaluable in that it can do things humans are not capable of doing. The important thing to realise is that the techniques used in processing big data can tell us virtually nothing about how the human brain works.

 

Tuesday 9 August 2016

Where is AI going?


 "In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable."

The above quotation appears at the start of an article "Will AI's bubble pop?" by Sally Adee in the New Scientist of 16th July. The significance of the quote is that it was made in 1970 by Marvin Minsky, one of the founding fathers of Artificial Intelligence - and it is quite clear that the prediction was wildly optimistic. The article goes on: "When the chasm between Minsky's promise and reality sank in, the disappointment destroyed A.I. research for decades". One of the reasons given was that there was "a research monoculture focused on a technique called rule-based learning which tried to emulate basic human reasoning."

When, in the 1970s, I was researching CODIL, a pattern-matching language based on observations of how clerks in a large commercial organisation thought about sales contracts, almost all attempts to publish were blocked because the approach did not conform to the rule-based monoculture, which was dominated by mathematicians implementing formal mathematical models.

As a result of the article I have sent the following letter to the Editor of the New Scientist, and if it is published I will add a comment below.

Re “Will AI’s bubble pop?” – New Scientist, 16 July
A “blue sky” casualty of the AI monoculture of the 1970s described by Sally Adee (16 July, p16-17) was CODIL, a pattern-recognizing language initially proposed in 1967 as a terminal interface for very large commercial systems. Later study showed CODIL could handle many very different tasks, such as solving New Scientist’s Tantalizers (21 August 1975, p438) and supporting an AI-based teaching package (New Scientist, 24 September 1987, p67). The “not invented here” reaction of the AI establishment contributed to the project’s demise.

I am currently reassessing the surviving research notes. In modern terminology CODIL was a highly recursive network language for exchanging messy real-world information between the human user’s “neural net” and an “intelligent” robot assistant. CODIL’s versatility arose because it allowed tasks to morph dynamically from open-ended pattern recognition, via set processing, to predefined rule-based operation. The experimental work concentrated on communication and decision-making activities, but the inherent recursive architecture would support deep network learning.
It seems that CODIL mimicked human short-term memory - an area where conventional AI has been singularly unsuccessful. In evolutionary terms the re-interpreted model suggests that early humans used an initially primitive language to transfer knowledge from one brain to another, creating a cultural neural net now some 10,000 generations deep! A CODIL-like brain model would automatically show weaknesses such as confirmation bias and a tendency to believe the most charismatic leader without questioning the accuracy of the information received.

Perhaps it is time to resurrect the project.
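For readers who never saw CODIL in action, the following minimal Python sketch conveys the flavour of decision making by partial pattern matching over sets of item = value pairs, which is the style of working the letter describes. The item names, values and matching rule here are my own inventions for illustration; they are not the original CODIL syntax or implementation.

```python
# A minimal Python sketch of decision making by partial pattern matching,
# loosely in the spirit of the CODIL work described above. The item names,
# values and matching rule are invented for illustration; they are NOT the
# original CODIL syntax or implementation.

# Knowledge is held as statements: a pattern of (ITEM, VALUE) pairs plus a result.
statements = [
    ({"CUSTOMER": "SMITH", "PRODUCT": "WIDGET"}, "APPLY TRADE DISCOUNT"),
    ({"PRODUCT": "WIDGET"},                      "QUOTE STANDARD PRICE"),
]

def decide(facts):
    """Fire the first statement whose items all match the current facts.
    Trying statements with more items first lets specific knowledge
    override general knowledge without a global rule book."""
    for pattern, result in sorted(statements, key=lambda s: -len(s[0])):
        if all(facts.get(item) == value for item, value in pattern.items()):
            return result
    return "REFER TO CLERK"   # nothing matched: fall back to the human

print(decide({"CUSTOMER": "SMITH", "PRODUCT": "WIDGET"}))  # APPLY TRADE DISCOUNT
print(decide({"CUSTOMER": "JONES", "PRODUCT": "WIDGET"}))  # QUOTE STANDARD PRICE
print(decide({"CUSTOMER": "JONES", "PRODUCT": "GADGET"}))  # REFER TO CLERK
```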

Having Trouble with Tantalizers?


During the 1970s I did a lot of work testing a heuristic problem solver called Tantalize, which was written in CODIL. The following news item was published in the New Scientist of 21 August 1975, and as I will be referencing it in the next blog post [insert link] I have decided to reproduce the original item.

For those who have trouble solving the Tantalizers that run each week in New Scientist, Dr Chris Reynolds of Brunel University has developed a computer programme.
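To give a feel for the kind of puzzle Tantalize tackled, here is a minimal Python sketch that solves a small invented Tantalizer-style puzzle by exhaustive search. The original Tantalize was a heuristic CODIL program, so this illustrates only the problem type, not the original method.

```python
# Illustrative sketch only: an exhaustive-search solver for a tiny invented
# Tantalizer-style puzzle. The original Tantalize was a heuristic CODIL
# program, so this shows the kind of problem, not the original method.
from itertools import permutations

people = ["Alice", "Bob", "Carol"]
pets   = ["cat", "dog", "fish"]

# Clues: Alice does not own the cat; Bob owns neither the cat nor the fish.
for assignment in permutations(pets):
    owns = dict(zip(people, assignment))
    if owns["Alice"] != "cat" and owns["Bob"] not in ("cat", "fish"):
        print(owns)   # {'Alice': 'fish', 'Bob': 'dog', 'Carol': 'cat'}
```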

How "Big Data" will try and put you into a Box

In 1967 I was involved in a design study which involved moving a magnetic tape based application onto direct access storage - and the file size was of the order of 100 megabytes, which was very "big data" fifty years ago. Things have changed and people are now talking of petabytes of data, so I decided to look into current views on Big Data by signing up for the FutureLearn online course "Big Data: From Data to Decisions". The course is more about programming systems which use big data - which is not what I was looking for - but there have been some useful discussions on the impact of big data, and on how far it can feed people's natural "confirmation bias" by selectively serving up material which reinforces their existing views, in effect making them more narrow-minded.
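As a toy illustration of that mechanism, the following Python sketch shows how ranking material by similarity to what a reader has already "liked" pushes dissenting or novel items to the bottom of a feed. The scoring rule and article names are invented for illustration.

```python
# A toy sketch of how preference-driven filtering narrows what a reader sees.
# The scoring rule (shared topic tags with previously "liked" items) and the
# article names are invented for illustration.
liked_history = [{"politics", "left"}, {"politics", "economy", "left"}]

candidates = [
    ("Article A", {"politics", "left"}),
    ("Article B", {"politics", "right"}),
    ("Article C", {"science"}),
]

def score(tags):
    # Items resembling past likes score highest, so dissenting or novel
    # material sinks to the bottom of the feed.
    return sum(len(tags & past) for past in liked_history)

feed = sorted(candidates, key=lambda c: -score(c[1]))
print([title for title, _ in feed])   # ['Article A', 'Article B', 'Article C']
```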