Sunday, 14 October 2018

Will robots outsmart us? by the late Stephen Hawking

There is an interesting article, "Will robots outsmart us?", in today's Sunday Times Magazine. While I don't accept all of Stephen's predictions, I was most interested to read:
When an artificial intelligence (AI) becomes better than humans at AI design, so that it can recursively improve itself without human help, we may face an intelligence explosion that ultimately results in machines whose intelligence exceeds ours by more than ours exceeds that of snails. When that happens, we will need to ensure that the computers have goals aligned with ours.
Later he says:
In short, the advent of super-intelligent AI would be either the best or the worst thing ever to happen to humanity. The real risk with AI isn’t malice, but competence. A super-intelligent AI will be extremely good at accomplishing its goals, and if those goals aren’t aligned with ours we’re in trouble. You’re probably not an evil ant-hater who steps on ants out of malice, but if you’re in charge of a hydroelectric green-energy project and there’s an anthill in the region to be flooded, too bad for the ants. Let’s not place humanity in the position of those ants. 
Of course we know what happened last time a super-intelligence came into existence. About half a million years ago the Earth was populated by a great variety of animals of comparatively low intelligence. All the higher animals had brains that worked in roughly the same way, and how much they could learn was limited because everything they learnt was lost when they died. Then one species, which we call Homo sapiens, discovered a way to recursively increase its own intelligence. It was already good at making tools, but for several million years the cost of trial and error learning had limited what it could do. But then it invented a tool to boost intelligence, which we call language. Language not only made it possible to make better tools, but also made it possible to recursively build a better language, generation by generation. So some 5000 generations later the Earth is home to a super-intelligent species ...

And are the goals of this species aligned with the goals of the millions of other species? Of course not. Billions of animals are kept as slaves to be killed and eaten, while the homes of countless more have been, or are being, destroyed.

If we invent a super-intelligent AI system, why should it treat us with more respect than we have shown for our animal relatives?

A new book "Brief Answers to the Big Questions," by Stephen Hawkins is published later this week
For the background to my observation see 

Sunday, 30 September 2018

How plans for a user-friendly computer were rubbished 50 years ago

In 1968 David Caminer and John Pinkerton (who were responsible for the world's first business computer, the LEO I, and who were directors of English Electric Computers) decided to fund research into a project to build inherently user-friendly computers. It was estimated that the market for such systems would be several hundred million pounds a year.
However, as a result of the government-inspired merger intended to make the UK computer industry more competitive, ICL was created, and the project was closed down with no serious attempt to assess the successful research into CODIL which had already been carried out.

This was, of course, about 10 years before the first personal computers, and it is interesting to speculate what might have happened if the research had not been so rudely interrupted. Perhaps the UK-based idea would have been successful and the first personal computers would have been inherently friendly. This would have meant that there would have been no need for the hard-to-use MS-DOS operating system - and no one would have heard of Microsoft.

An account of what happened has been prepared for the archives of the LEO Computer Society.
To see how the idea of a user-friendly computer originated read

In fact the research was restarted on an unfunded basis for a number of years, and a recent reassessment suggests that the original proposal was actually modelling how the human brain works. For more information see:

Friday, 31 August 2018

Are computers making too many decisions about us?


In today's Times Edward Lucas writes "Tech Giants must come clean with us - Too many decisions about our careers, love-lives and credit-worthiness are being made by secretive online algorithms."

He is right to point out that the use of computers, particularly by very large or powerful companies, to make decisions which affect our lives needs watching - but on the online comments page I have pointed out that decisions made by humans may not be any more reliable. I wrote:
But are humans any more reliable? We all have biases and make generalizations which have little or no foundation in reality.
To give an example. Before I retired I worked in a university teaching Computer Science on a sandwich course basis - which meant that the department regularly had to find about 90 placements for students for "on the job" training with mainly local firms. Almost invariably the last 10 or 20 to be placed included a disproportionate number of students who either had foreign-sounding surnames or were not white anglo-saxon in appearance - irrespective of how well they were doing on the course.
One of my personal first year tutees, who had just failed an interview for a job working with a computer in the sales department of a small firm, asked me whether there might have been racial discrimination. What seems to have happened in the interview was that the computer manager realised that the student was not familiar with commercial English (for example the difference between "invoice" and "statement") - and probably assumed (wrongly) that because he looked foreign he did not understand English. While of course the manager might have been directly discriminating on the grounds of race, it was far more likely that he had not realised how little the average 18-year-old knew of commercial jargon, and had jumped to an inappropriate conclusion.
A very different example, where I nearly acted on an inappropriate "racist" assumption. 50 years ago we lived in a small town where the population was almost exclusively white. We went to a family wedding in London, taking with us our 2-year-old daughter. We took it for granted that on one side of the aisle nearly everyone would be of European origin - and on the other side nearly everyone would be of Asian origin. As everyone was waiting for the bride (who was five minutes late) my daughter suddenly stood up in the pew, pointed towards the people on the other side of the aisle and shouted "Look Mummy, look." I looked to see where she was pointing and what she had seen to make her so excited. All I could see was the crowd of Asians, and before I could grab her and put my hands over her mouth to stop her making a racist comment, she shouted out "There's Mary with baby Jesus." What was new and exciting to her was that she had never been in a Roman Catholic church before!
While I am concerned about "black box" computers making decisions, those decisions either reflect the biases of the programmers who designed the system or are based on the statistical analysis of "big data", in which case they are likely to be more reliable than a human's. As I see it, the problem is that the computer systems making the decisions are "black boxes" which cannot explain what they are doing in a way that the people affected can understand.
(In any case, if someone makes a racist comment to you, would you ask them why they said it - and would you really expect an honest reply in every case?)

Wednesday, 29 August 2018

Mission Statement

We are all, both individually and as a society, trapped by boxes - some physical, some mental. Some of these boxes are built from our childhood experiences, some from the customs and beliefs of the society in which we live, some are imposed by the technology we use, and ultimately we cannot escape the planet on which we live. 

The aim of this site is to look creatively at some of the issues involved and present them in an entertaining and educational way. So I will include posts about the ways that technology (and particularly computers) affects our lives, and also the social and political issues that limit our actions. In addition I plan to continue my Science Limericks and post pictures I find attractive or thought-provoking under the heading "Trapped by the Camera."

The big change is that in future, posts relevant to the evolution of human intelligence, and the computer language CODIL, will be posted on the blog "An Evolutionary Model of Human Intelligence".

Tuesday, 28 August 2018

Trapped by the Camera

Laying a new gravel path in Wendover Woods.
I was attracted by the black and white pattern (faintly touched with colour) that was created as a new path between the trees was being constructed in Wendover Woods, near Chesham, Bucks.
[Significant changes are taking place in the central area of Wendover Woods. A large new car park has been opened, and the existing "Cafe in the Woods" is to be replaced with a much bigger cafe with a striking view over part of the Chiltern Hills.]
For pictures of the changes see my photographs on Geograph.

Monday, 27 August 2018

We live in a Wonderful World.

Libby Purves writes an article in today's Times, "Aggressive Atheism denies Culture and History", which attracted a lot of comments. My contributions included my reply to the idea that if you didn't believe in God it took all the mystery out of life. I responded:

Recently I was walking in some National Trust woodland and decided to sit down and admire the view. I turned to someone sitting nearby and said how wonderful it was to be there and observe nature at work.
He replied "It's wonderful and its all Gods' work" and it was clear he had no idea how wonderful it really is, when seen by an atheist who understands science. To me nature is fully of partly explored mysteries and there is alway room for creative imagination in trying to understand the underlying science - and the evolutionary implications.
The ideas of looking at the wonders of nature and having only one answer "God did it" would seem boring, boring, boring to me. To him there was no mystery and no need to think creatively - one meaningless and unsupportable answer and you can sit back and let your mind stagnate.
Religious people who hide their lack of imaginative thinking behind a screen of ancient myths may find it satisfying - I am more interested in actively exploring the wonders of the real world."

Friday, 17 August 2018

Keep young by learning

"Anyone who stops learning is old, whether at 20 or 80. Anyone who keeps learning stays young." - Henry Ford
I have just started on the FutureLearn course "Psychology and Mental Health", run by the University of Liverpool, and decided I would mention it on this blog - not only because it is interesting, but because I find it personally invigorating to be in a learning situation, interacting with lots of other students from very different backgrounds.

The first thing I did was to select the above quotation - and almost immediately I got a new email - it was the British Psychological Society Research Digest.
And what was the headline article? A blog post, "Do people with a high IQ age more slowly?" The blog relates to a paper behind a paywall which I can't access, entitled "Higher IQ in adolescence is related to a younger subjective age in later life: Findings from the Wisconsin Longitudinal Study."
I particularly like the observation "Perhaps a higher IQ, which helps us to process complex information more easily, also increases our curiosity about the world, and it’s that sense of wonder and excitement that can make us feel more youthful."
This really sums up why I enjoy doing FutureLearn courses and why, at 80, I am still actively interested in research. If I ever lose my sense of curiosity, or fail to get excited when I learn something new, I am sure I would lose the will to live.