Science and Engineering

Why Would it Matter if the Robots or Artificially Intelligent Machines Took Over?

Of course, robotic machines already perform many tasks. Driverless cars are on the horizon, missiles guide themselves over the terrain to their targets, cars parallel park without human intervention and robots perform surgical tasks.

[Image: the sensing zone of an intelligent vehicle.]

Robots, assisted by artificial intelligence, are undertaking some tasks that we are accustomed to seeing performed by people.

This can be worrying for those who feel they are being denied work, or who feel that it is dangerous to rely on a robot or a computer to perform tasks that require a level of skill.

Professor Stephen Hawking has twice in recent months expressed his concern that artificial intelligence will become a danger to people within 100 years (Geek). I really don’t know whether this is an accurate time scale, nor which of the various dangers facing the world is most serious.


[Image: the robot from Terminator.]

The basic idea is that once machines get to the point where they are smarter than us and, more importantly, when they understand that they are smarter than us, what’s to keep them from surpassing us completely? It’s the stuff of sci-fi, portrayed in films like I, Robot, Terminator and The Matrix, and by more comic book villains than we can count, although a robot could easily count them.


There is nothing new in the fear of automation. It was what the Tolpuddle Martyrs and the Luddites protested against, although strictly the Tolpuddle Martyrs were protesting against low rates of pay; others of the period smashed the new threshing machinery.

Robotics, like some aspects of computing, most notably artificial intelligence, is the latest form of automation. The truth is that most robotic automation and artificial intelligence is more reliable than a person, especially when the task is repetitive and well prescribed. Programs that perform simple medical diagnosis have existed for years; one famous example is INTERNIST, a diagnostic computer program for internal medicine. In industrial inspection, computer vision systems have been shown to be more reliable than people, who are easily distracted and become weary. Parallel parking a car can be reduced to a set of detailed instructions. Driving a car along an urban road can even be reduced to a set of relatively well prescribed instructions. Identifying components on a production line can be described to a computer using examples. We should not feel threatened by such software and machinery. Intelligent it may appear, but it cannot readily learn how to perform a new task. There are experimental programmes that seek to reason by analogy in limited domains, and computers can be programmed to display and respond to emotional states.
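
As a rough illustration of the distinction drawn above between a task "reduced to a set of detailed instructions" and one "described to a computer using examples", the sketch below shows both in miniature. It is a hypothetical toy written in Python; the rules, measurements and labels are invented for illustration and are not drawn from INTERNIST or any real inspection system.

```python
# Two toy forms of "intelligent" behaviour:
#  (1) a task reduced to prescribed rules (a rule-based assessment), and
#  (2) a task described by labelled examples (a nearest-neighbour classifier).
# All rules and data are invented for illustration only.
from math import dist

# (1) Prescribed instructions: each rule pairs a condition with a finding.
RULES = [
    (lambda s: s["temperature"] > 38.0 and "cough" in s["symptoms"],
     "possible chest infection"),
    (lambda s: s["temperature"] <= 37.5 and "fatigue" in s["symptoms"],
     "possible anaemia"),
]

def rule_based_assessment(observation):
    """Return every finding whose rule matches the observation."""
    return [finding for condition, finding in RULES if condition(observation)]

# (2) Learning from examples: label a part on a "production line" by the
# closest labelled measurement (length mm, width mm).
EXAMPLES = [
    ((2.0, 5.0), "bolt"),
    ((2.1, 5.2), "bolt"),
    ((8.0, 8.0), "washer"),
    ((7.8, 8.3), "washer"),
]

def classify(measurement):
    """Return the label of the nearest labelled example."""
    _, label = min(EXAMPLES, key=lambda ex: dist(ex[0], measurement))
    return label

if __name__ == "__main__":
    patient = {"temperature": 38.6, "symptoms": {"cough", "headache"}}
    print(rule_based_assessment(patient))  # ['possible chest infection']
    print(classify((2.05, 5.1)))           # 'bolt'
```

Neither sketch can learn a task it was not built for: the rules and the examples were chosen by a person in advance, which is exactly why such software can appear intelligent without being so.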

A truly intelligent computer or robot should be able to recognise beauty, identify the need for social intercourse and recognise the difference between good and evil. We have yet to see a computer or robotic system that is truly intelligent.

We might further ask whether people themselves can reliably judge between good and evil.

In an alternative, and more emotive, response Sean Miller on AlterNet argues that any artificial intelligence is far behind even the most basic forms of natural “intelligence”. He also argues that Prof. Stephen Hawking is stepping outside his realm of expertise when he comments on the danger of artificial intelligence. This might be true, but it might also be that he has seen elements of artificial intelligence that many have not. Caution is needed because other leading technologists are making similar comments. These include Elon Musk, founder of Tesla Motors, the aerospace manufacturer SpaceX and PayPal, and Steve Wozniak of Apple and many other high-tech ventures. They also include Lord Martin Rees (cosmologist), Kathleen Richardson (anthropologist of robotics) and Daniel Wolpert (Royal Society Research Professor in Engineering), reported in discussions at Cambridge University. Kathleen Richardson challenges the human aspects that we project onto robots and intelligent machines.

[Image: Elon Musk, founder of PayPal, SpaceX and Tesla Motors.]

[Image: Steve Wozniak of Apple and many other hi-tech ventures.]

Daniel Wolpert points to the programmer as the real force of evil behind supposedly intelligent or smart software. Lord Martin Rees is also cited in the Telegraph.


It is certainly true that a human mind is behind so-called artificial intelligence, even if a computer has generated the code or learned the behaviour. It is also true that humans tend to project human characteristics onto animals and machines, personalising cars and ships for example. It seems unlikely that artificial intelligence will develop to the point where machines can reason with the same level of creativity that a person may possess. Whilst artificial intelligence can identify laws of physics and relationships in data, this does not constitute true intelligence; in each case a programmer has designed the software to perform a particular task. It is very unlikely that computer programs will be designed in the foreseeable future that are able to make moral judgements and outwit the creativity of the human mind. It may be that those who fear the rise of artificial intelligence do so because they are concerned that the artificial intelligence they foresee does not have a moral compass. I maintain that the subtlety that precludes a moral compass also precludes other aspects of abstract thought, and this is why we should not expect the artificially intelligent robot to take over. The artificially intelligent robot will only be as clever as we have built it to be. The danger is that we do not design robots with sufficient care; that we forget to curb our own enthusiasm. It is not the artificially intelligent robot that is the problem but the human who programmes it.
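
To make the point about "relationships in data" concrete, here is a minimal, hypothetical Python sketch that "discovers" a straight-line relationship in a handful of invented data points. It does so only because a programmer decided in advance what kind of relationship to look for and how to measure the fit.

```python
# A toy program that "finds a relationship in data": an ordinary
# least-squares fit of y = slope * x + intercept, with no libraries.
# The data points are invented for illustration.

def fit_line(points):
    """Return (slope, intercept) of the least-squares line through the points."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in points)
             / sum((x - mean_x) ** 2 for x, _ in points))
    intercept = mean_y - slope * mean_x
    return slope, intercept

if __name__ == "__main__":
    data = [(0, 1.1), (1, 2.9), (2, 5.2), (3, 6.8), (4, 9.1)]
    slope, intercept = fit_line(data)
    print(f"y = {slope:.2f} * x + {intercept:.2f}")  # roughly y = 2x + 1
```

The program does exactly what it was designed to do and nothing more; it cannot decide whether the relationship matters, or whether it should have looked for one at all.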

The first problem is that we might believe that we can create an artificially intelligent robot. The first reaction of most technologists would be “Why is that a problem? Why shouldn’t we be creative?” It might be possible, but is it desirable? Do we understand what is needed to create such a machine? We might think we understand how to make a machine that is capable of exhibiting intelligent behaviour, but how do we endow the machine with a conscience, a sense of purpose, a sense of right and wrong, an appropriate moral judgement? The problem is that we probably cannot define what constitutes a conscience, a sense of purpose, a sense of right and wrong, or an appropriate moral judgement.

From the perspective of faith, as soon as we consider that we can define such concepts we are following the path of Adam and Eve; we are on the verge of adopting the position and understanding of God. To believe that we are able to make these appraisals and the related decisions is to believe that we are equal to God, and yet so many ventures of social freedom have shown how little we understand. In some parts of the world it is considered appropriate to kill people for serious crimes. It is common in the western world for people to believe that they know what is best in the organisation of society and culture. Yet we are taking our planet into a cycle in which it is difficult to deny that human action is causing major changes to the climate, changes that we seem unable to manage.

This, in my view, is why it matters if the robots or artificially intelligent machines take over. It matters because it signifies that their creators have an inflated and unhealthy view of their importance in the world. This is what is dangerous: the failure to recognise God and God’s omnipotence. It is what has beset "the modern age" of science since the eighteenth century, and it is what lies behind the modern notion that medical "science" can cure all our ills. As a result we can fail to prepare people for the end of life.

Many technologists, scientists, entrepreneurs and members of the public do not recognise these matters as a problem; they place technical knowledge and understanding above the need to respect the authority of God over the created world.

There is no salvation for the world until we are able to set our understanding and knowledge in a proper perspective with respect to the authority and love of God. Mankind was created to be a good steward of God's world. Jürgen Moltmann describes how God must, for him, be at the centre of science and technology in Science and Wisdom, SCM Press (2003). The years have not dulled the importance of the arguments Moltmann makes.




January 2016