Friday, 26 April 2019

Philosophy & Technology --- and Computers

The course is run by the University of Twente and promises:

"The course focuses on the relations between humans and technologies. You will learn how philosophy can help us understand the social implications of technologies."

Clearly many of the problems I have had with CODIL arise because the philosophy of my approach - which assumes that humans live in a complex and uncertain world - is very different from the rigidly predefined computer systems in common use.

I find such courses are a good way to stimulate new ideas and will post anything of note either as a comment to this post or as a separate blog post. Maybe I will even meet some of you on the course!


  1. There are many aspects of the existential philosophy of Karl Jaspers which are relevant to my own career.

    Between 1962 and 1965 I worked as a manual cog in an international R & D management information system which was very concerned with the real world problems faced by the company. I came across the ideas of Vannevar Bush's 1945 essay "As We May Think" and felt I needed to know more about computers so that I could help management to understand the messy and incomplete market place in which the company operated.

    So I moved to a nearby computer centre and found myself working on what was probably the most complicated sales accounting system implemented anywhere in the UK in 1965. I became very interested in why programs failed - and as I saw it the problems related to poor understanding and communications between the sales staff and the computer system. I assumed that management wanted a transparent system (rather than a mysterious black box system) with excellent two-way communication.

    Within a year, working for a computer manufacturer, I was project leader working on the design of a computer with a human-friendly symbolic assembly language.

    The big problem was not technical - it was in the underlying philosophy. The computer industry is built on the foundation that it must be possible to predefine the task, while my approach started from the idea that real commercial problems exist in a complex and ever changing world - and any human interface should allow for a dynamic and changing requirement. As long as the established technology was commercially successful and expanding, why bother looking at alternatives?

    What I hadn't realised was that computers gave management more control - and a tightly controlled system which satisfies 80% of the customers at a comparatively low cost was more cost-effective than a dynamically flexible non-computer system which attempts to serve 100% of the potential customers as if they were individual humans rather than a "statistically average human."

    The fact that we now have black box computers (rather than human-friendly transparent ones) means that everyone is now taught to think like a computer at school - but wouldn't it have been better if we had "taught" computers to understand us?

    1. I found the section in the Stanford Encyclopedia of Philosophy on the philosophical relationship between science and technology very helpful.

      Many of the problems I had in my research into human-friendly computers (see other posts) can be linked to the fact that, temperamentally, I am a scientist rather than a technologist. As a student I ended up doing a Ph.D. in Theoretical Organic Chemistry (in the days before computers!) and this involved building mental models (some quite mathematically sophisticated) to try and explain observations of reality.

      Later I was faced with the problem that mistakes happened because sales people did not understand 1960s computers. I took the view (because no one told me it was supposed to be difficult) that if computers processed certain types of potentially complex information tasks in the same way as humans - the humans would understand the computers and vice versa.
      And if I model the way humans tackle a particular task, I can implement that model on a computer.
      So I proposed an initially rather simple model aimed at a particular commercial task.
      I then realised the model could be generalized to handle other, different tasks.
      And here lies the problem: as a scientist my interest is in the models of "reality" that I build, and if someone said to me "Your model does not cover the xxxxxx situation" my reaction would be to look at the weaknesses in my model and how it could be improved. If I had had a technology-oriented manager there were plenty of tasks where the earlier model could have been commercially exploited, but my innately "scientifically orientated" mind kept me looking for more places where my model failed - so that I could come up with a different model.

      Now I am retired I have had the time to take a wider look and I believe my original model can be re-interpreted in terms of neural networks. In particular it suggests a possible pathway which could (as a scientist I am ever cautious) explain the evolution of human intelligence. If the model has technological implications in the design of computers and artificially intelligent systems so be it. I am happy to leave the matter for younger technologists to exploit.

    2. I have been having some problems which superficially relate to terminology - but which appear to go much deeper. In discussing the philosophy of technology there are underlying assumptions about what it is to be human. For instance, in describing Jaspers's views it is said: "No attachment is possible to mass produced objects."
      My view is that the ability of humans to make attachments is an important part of being human - but there is a limit (effectively set by our brain), and human intelligence arises because we can share information (including tools and the means of making tools). Mass production means we have more time to form mental attachments to an ever increasing number of things (science, music, the visual arts, literature, FutureLearn courses, etc.) because we no longer have to form attachments to the more mundane aspects of everyday living.
      This is not to deny the problems Jaspers sees - but I see these problems as caused by the way society works, and modern technology is nothing new. Was the mass production of millions of stone blocks to build the Egyptian pyramids (possibly by slaves?) any more dehumanising than the machine driven tools of the Industrial Revolution and later?