Tuesday, 16 August 2016

The Problem with Big Data based Machine Intelligence

I have rather neglected this blog recently, partly because of other distractions, including a stimulating FutureLearn course on the Hobbit (Homo floresiensis), and partly because I am concentrating on writing up my ideas about the evolution of human intelligence. However I have just come across an article by Gary Marcus and Ernest Davis entitled "Eight (No, Nine!) Problems with Big Data" which is well worth reading.

The issue I have with the big data approach to machine intelligence is that it tackles the problem in a very different way from the human brain.

If we think about the evolution of the brain, it started very small and became incrementally bigger over millions of years. And each animal, humans included, starts life knowing virtually nothing apart from some pre-programmed instincts, and its knowledge increases incrementally through life. The economics of evolution involve optimising the use of resources to maximise survival, which sets limits on the size of the brain and the amount of time spent learning. In effect, small amounts of "data" are beautiful, as long as there is enough to be cost-effective in the battle for survival.

Big data applications use vast amounts of data which is already available in digital form. The Google language translator, for example, draws on millions of document texts in different languages (so the data collection cost per byte is extremely low) and applies powerful statistical processes of a kind which are clearly not built into the human brain.
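As a toy illustration of the kind of statistical technique involved (this sketch is purely my own illustration and bears no relation to Google's actual system), a few lines of Python can guess a text's language purely from letter-trigram frequencies, with no understanding of meaning at all:

```python
# Toy statistical language identifier: count letter trigrams in tiny
# sample texts, then label new text by which sample's trigram profile
# it shares the most trigrams with. Real systems use vastly more data.
from collections import Counter

def trigrams(text):
    """Return a Counter of letter trigrams in lower-cased text."""
    t = "".join(c for c in text.lower() if c.isalpha() or c == " ")
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

# Illustrative stand-ins for the "millions of documents" a real
# system would use.
SAMPLES = {
    "english": "the quick brown fox jumps over the lazy dog and the cat",
    "french": "le renard brun rapide saute par dessus le chien paresseux",
}
PROFILES = {lang: trigrams(text) for lang, text in SAMPLES.items()}

def guess_language(text):
    """Score text against each profile by overlapping trigram counts."""
    grams = trigrams(text)
    return max(PROFILES, key=lambda lang: sum(
        min(n, PROFILES[lang][g]) for g, n in grams.items()))
```

The point of the sketch is that the method is pure counting: given enough sample text it works remarkably well, yet it tells us nothing about how a human recognises a language.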

Of course in many cases the big data approach is invaluable, in that it can do things humans are not capable of doing. The important thing to realise is that the techniques used in processing big data can tell us virtually nothing about how the human brain works.


Tuesday, 9 August 2016

Where is AI going?


 "In from three to eight years we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight. At that point the machine will begin to educate itself with fantastic speed. In a few months it will be at genius level, and a few months after that its powers will be incalculable."

The above quotation appears at the start of an article "Will AI's bubble pop?" in the New Scientist of 16th July by Sally Adee. The significance of the quote is that it was made by one of the founding fathers of Artificial Intelligence, Marvin Minsky, in 1970 - and it is quite clear that the prediction was wildly optimistic. The article goes on: "When the chasm between Minsky's promise and reality sank in, the disappointment destroyed A.I. research for decades". One of the reasons given was that there was "a research monoculture focused on a technique called rule-based learning which tried to emulate basic human reasoning."

When, in the 1970s, I was researching CODIL, a pattern-matching language based on observations about how clerks in a large commercial organisation thought about sales contracts, almost all attempts to publish were blocked because the approach did not conform to the rule-based monoculture, which was dominated by mathematicians implementing formal mathematical models.

As a result of the article I have sent the following letter to the Editor of New Scientist, and if it is published I will add a comment below.

Re “Will AI’s bubble pop?” – New Scientist 16 July
A “blue sky” casualty of the AI monoculture of the 1970s described by Sally Adee (16 July, p 16-17) was CODIL. This was a pattern-recognizing language initially proposed in 1967 as a terminal interface for very large commercial systems. Later study showed CODIL could handle many very different tasks, such as solving New Scientist’s Tantalizers (21 August 1975, p 438) and supporting an AI-based teaching package (New Scientist 24 September 1987, p 67). The “not invented here” reaction of the AI establishment contributed to the project’s demise.

I am currently reassessing the surviving research notes. In modern terminology CODIL was a highly recursive network language for exchanging messy real-world information between the human user’s “neural net” and an “intelligent” robot assistant. CODIL’s versatility arose because it allowed tasks to morph dynamically from open-ended pattern recognition, via set processing, to predefined rule-based operation. The experimental work concentrated on communication and decision-making activities, but the inherent recursive architecture would support deep network learning.
It seems that CODIL mimicked human short-term memory - an area where conventional AI has been singularly unsuccessful. In evolutionary terms the re-interpreted model suggests that early humans used an initially primitive language to transfer knowledge from one brain to another, creating a cultural neural net now some 10,000 generations deep! A CODIL-like brain model would automatically show weaknesses such as confirmation bias and a tendency to believe the most charismatic leader without questioning the accuracy of the information received.

Perhaps it is time to resurrect the project.

Having Trouble with Tantalizers?


During the 1970s I did a lot of work testing a heuristic problem solver called Tantalize - which was written in CODIL. The following news item was published in the New Scientist of 21 August 1975 and, as I will be referencing it in the next blog post [insert link], I have decided to reproduce the original item.

For those who have trouble solving the Tantalizers that run each week in New Scientist, Dr Chris Reynolds of Brunel University has developed a computer programme.

How "Big Data" will try and put you into a Box

In 1967 I was involved in a design study which involved moving a magnetic tape based application onto direct access storage - and the file size was of the order of 100 megabytes, which was very "big data" fifty years ago. Things have changed and people are now talking of petabytes of data - so I decided to look into current views on Big Data by logging into a FutureLearn online course, "Big Data: From Data to Decisions". The course is more about programming systems which use big data - which is not what I was looking for - but there have been some useful discussions on the impact of big data, and on how far it can feed people's natural "confirmation bias" by selectively presenting material which reinforces their views, in effect making them more narrow-minded.

Saturday, 30 July 2016

Trapped on Flores - Why did the Hobbit have such a small brain?

I am currently doing a short online course, Homo floresiensis uncovered, run by the University of Wollongong. This hominin was only about one metre high and had a disproportionately small brain. In terms of my own interest in the evolution of human intelligence this raises the interesting question of whether a smaller brain means less intelligence.

Homo floresiensis
First some of the background. Flores is an island in the Indonesian chain east of Bali and the Wallace Line, which separates the Asian fauna and flora from the Australian fauna and flora. As such it lacked (until Homo sapiens arrived about 50,000 years ago, on his way to Australia) any Asian carnivores, herbivores or primates - with the exception of a pygmy elephant, Stegodon, and the diminutive Homo floresiensis. The only native carnivore which could threaten the hominin was the Komodo dragon.

Sunday, 26 June 2016

A letter to my MP about the Referendum

The result of the Referendum has been very divisive, and the sooner we all know where we stand the better. It is important that our MPs make it clear to their constituents that they understand the pros and cons of either leaving immediately or standing back to assess properly how we can best move forward to a prosperous and more democratic society.

As a result I have written the following letter to my MP saying that I will support him in whatever decision he makes, as long as he can assure me that he understands the risks involved.

Friday, 3 June 2016

Trapped by the Referendum: It will be a disaster whatever the result

There is no escape. The referendum will happen and the result, whatever it is, will be a disaster, because it is addressing the wrong issue in a way that can only make the current situation more unstable.
from The Times, 31st May

Sunday, 29 May 2016

The Australian Government is burying its head in the sands of the Great Barrier Reef

In 1990 I flew to Australia to work on a computer system to provide information relating to the possibility of climate change - see To Australia in a Box. Shortly after I arrived the project was cancelled, because it was clearly not considered sufficiently important to spend government money on.

Coral in the Great Barrier Reef
Since then climate deniers in the Australian government have come and gone - and the temperature has continued to rise, with more and more temperature records being broken. It is clear that warmer seas, made more acidic by the increasing levels of dissolved carbon dioxide, could have a serious effect on natural features such as the Great Barrier Reef.

A news item today reveals that the Australian section of the UN World Heritage site report on the effects of climate change has been axed. Apparently the Australian Department of the Environment still believes that burying one's head in the sand and hoping the problem will go away is the answer to the realities of climate warming. At least their attitude is clear - the whole world now knows that the Great Barrier Reef is not in safe hands.