Quite rightly, the debate is ramping up regarding the influence and dangers of artificial intelligence. In this article I try to give a ‘hype-free’ view of the realities of the impact and threats from the development of artificial intelligence.
I am not an AI expert; however, I have spent most of my career buried deep in complex computer technologies and have implemented solutions using AI methods and techniques.
What is intelligence?
- We are the benchmark of intelligence because, as far as we know, there is nothing in the universe more intelligent than us.
- There is no agreed definition of what intelligence is. However, it is reasonable to say that intelligence is a combination of many characteristics, including the ability to reason, plan, solve problems, think abstractly, comprehend complex ideas, learn quickly and learn from experience[1]. One could also include the influence of emotional states upon these cognitive characteristics, among many others.
- Although we understand a lot about how the brain is structured and operates, we do not understand the processes by which, for example, consciousness, self-awareness and emotional states develop. The brain is a complex system, and the study of complexity science suggests that such characteristics may be emergent and beyond our ability ever to predict or understand.
- Being intelligent creatures ourselves, it is reasonable to say that subjectively we know intelligence when we see it (when compared to ourselves).
What is artificial intelligence?
An artificially intelligent system is one that we create that shows one or more of the traits of intelligence, to some level or another.
- In a normal computer system there are two primary components: the program and the data. The program manipulates the data, and the data may determine what route the program takes through the code, but the program cannot change itself, only the data that it accesses.
- In AI systems the program can still manipulate data, but it can also treat parts of the program as data and change those too, so the behaviour of the program can change in ways that we cannot predict just by looking at the original program or data.
- AI programs can also have a ‘bias’ that changes with time. So instead of the program always doing the same thing with the same data, other data or previous executions of the program may have made it more or less likely to take a certain path through the code.
- Some AI programs use probabilistic or statistical methods to ‘bias’ the logical path that the program takes (see the sketch after this list).
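To make this concrete, here is a minimal sketch, in Python, of a program whose route through the code is biased by the outcomes of previous runs rather than fixed by the input alone. All the names, weights and the reward signal are invented for illustration; no real AI library is being shown here.

```python
import random

# Learned weights: how strongly the program favours each path through
# the code. They start equal, but previous executions shift them.
path_weights = {"path_a": 1.0, "path_b": 1.0}

def handle_input(data):
    # Choose a path in proportion to the learned weights, rather than
    # via a fixed if/else on the input data alone.
    paths = list(path_weights)
    chosen = random.choices(paths, weights=[path_weights[p] for p in paths])[0]

    outcome = run_path(chosen, data)

    # Feedback from the outcome changes the bias for future runs, so the
    # same input may later take a different route through the code.
    if outcome_was_good(outcome):
        path_weights[chosen] *= 1.1
    else:
        path_weights[chosen] *= 0.9
    return outcome

def run_path(path, data):
    # Stand-ins for two different processing routes through the program.
    return ("route A" if path == "path_a" else "route B", data)

def outcome_was_good(outcome):
    # Placeholder reward signal; a real system would get this from its
    # environment or from training feedback.
    return random.random() < 0.5

for i in range(5):
    print(handle_input(i), dict(path_weights))
```

Run repeatedly, the weights drift, and the same input no longer guarantees the same path: the program’s behaviour has been shaped by its history, not just its original code.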
AI is sometimes categorised into weak or strong (deep) intelligence.
Weak AI
- Highly task-orientated systems that do a very limited job, such as driving a car, identifying a face, or cleaning your house. These systems cannot expand their abilities beyond those that have been instantiated in the programming; they adapt to their environment only within those limits.
- They have to be taught what their response should be to certain inputs; from there they can start to identify new rules and possibly change their behaviour, finding patterns that we cannot predict (see the sketch after this list).
- Their behaviour is still fundamentally controlled by the program.
- It requires expert knowledge to develop an AI system including detailed knowledge of the task to be undertaken.
- The system’s behaviour is still a highly logical process.
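As a concrete illustration of the ‘taught responses’ point above, here is a minimal sketch of a toy nearest-neighbour learner. The feature vectors and responses are invented purely for illustration; no real system is this simple, but the principle is the same: behaviour is fixed entirely by what it was taught.

```python
# Taught examples: (input features, required response).
taught = [
    ((0.9, 0.1), "brake"),     # e.g. obstacle near, speed low
    ((0.1, 0.9), "continue"),  # e.g. obstacle far, speed high
]

def distance(a, b):
    # Squared Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b))

def respond(features):
    # Respond with the taught response whose example input is closest
    # to the new input.
    _, response = min(taught, key=lambda pair: distance(pair[0], features))
    return response

# A new, unseen input is mapped onto the nearest taught behaviour.
print(respond((0.8, 0.2)))  # -> "brake"
```

The system generalises to inputs it has never seen, but only by interpolating between the examples its developers chose; it cannot invent an ability that was never instantiated in its programming.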
Deep AI
- Very general systems able to learn completely new skills and apply what they have learnt in one skill to improve their abilities in others. So an AI robot that can drive and interact with people may use some of its experience of interacting with people to predict what a human driver will do in a given circumstance.
- They do not need a highly organised or thorough learning process.
- They have an emotional and moral dimension that can influence their actions and hence they are much less ‘logical’ in their responses.
- Their behaviour is not controlled by the initial programming, or it quickly diverges from it.
- They are aware of themselves and other intelligent systems as having individual characteristics and are able to use this knowledge to adapt when interacting with others.
Weak AI and Deep AI: an Example
As an example, let’s consider what the likely responses would be from a weak AI system and a deep AI system (us) to a difficult social question: what should society do with highly dangerous convicted criminals who will never be released from prison?
Weak AI response
Logically it makes sense to exterminate all highly dangerous convicted criminals: they cost the state a lot of money, they remain a danger to other humans, and they have no value to society.
Versus
Deep AI response
Because we have an emotional and moral viewpoint, and some of us have faith in deities whose teachings are against taking another life, many humans are against killing other humans for any reason. We may also have ‘faith’ that such criminals can be rehabilitated. This counterbalances the weak AI response.
What is the current state of AI?
- At the moment we are making significant advances in weak AI systems, and it is likely that these advances will continue to strengthen such systems. Weak AI is better than us at identifying patterns in complex inputs and at carrying out complex logical tasks such as driving (in some respects).
- We are not moving in any major way toward deep AI, because it is likely that deep AI will require major leaps in our understanding both of the technology and of what intelligence is.
- Weak AI still requires the people developing the system to understand the task to be accomplished in great detail. To develop deep AI using this approach we would therefore have to understand how to program, at some level, all the deep human characteristics such as self-awareness, emotional states and moral viewpoints, and this is not feasible.
- It is likely that to develop deep AI we may need to develop some new, fundamentally simple technology from which deep intelligence can emerge through the complex interaction of many simple parts.
So how dangerous is AI?
In weak AI systems the main danger is not that the machines will see us as some kind of threat, or as irrelevant, because that requires deep intelligence. The main danger lies in the ability of the developers to program the system to learn effectively and to train it in all aspects of the behaviour of its environment. For example, consider a driverless car travelling down a road flanked by parked cars, with another car behind it and a car approaching on the opposite side of the road, when suddenly a football bounces into the road in front of it. If the car stops, it calculates that it will be hit by the car behind; if it takes avoiding action, it may hit the oncoming car; if it continues, it may hit a child running to fetch the ball; and if it hands control back to the driver, they may not have time to take the appropriate action. The general rules for this situation have to be programmed by humans, and the humans may get them wrong (a sketch of such rules follows below).
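A hedged sketch of what such hand-written rules might look like is below. The option names and risk numbers are invented; real estimates would come from the car’s sensors and models, and the point is that the rule itself is a human judgement that may be wrong.

```python
def choose_action(risk_if_stop, risk_if_swerve, risk_if_continue):
    # Each argument is an estimated probability of harm for one option,
    # produced elsewhere by the car's sensors and models.
    options = {
        "stop": risk_if_stop,          # may be hit by the car behind
        "swerve": risk_if_swerve,      # may hit the oncoming car
        "continue": risk_if_continue,  # may hit a child chasing the ball
    }
    # The human-programmed rule: take the option with the lowest estimated
    # risk. Whether 'lowest estimated risk' is even the right rule is
    # itself a human judgement.
    return min(options, key=options.get)

print(choose_action(risk_if_stop=0.4, risk_if_swerve=0.3, risk_if_continue=0.7))
# -> "swerve"
```

If the developers mis-estimate the risks, or choose the wrong rule in the first place, the car will faithfully execute a bad decision; the danger resides with the people who wrote the rules, not with any ‘intent’ in the machine.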
So the potential dangers of weak AI systems reside with the developers. This is likely to limit both their use and the tasks they are given. Although weak AI systems may present dangers to relatively small numbers of people, they are very unlikely to threaten human civilisation.
The potential dangers from deep AI systems would be more concerning. They will have ‘minds of their own’ that may be completely unpredictable to us and to other AI systems. Even if such systems have only a limited emotional or moral dimension, they can make logical decisions, like the example of what to do with prisoners, which could be detrimental to humans. However, to be truly menacing such systems would have to be numerous, highly mobile and self-sufficient; otherwise we can always just pull the plug.
At any time we may have a ‘eureka moment’ that accelerates the development of such systems, but they are likely to be many (many) decades away. In fact, some experts believe that we will never have the ability to develop real deep AI systems.
Why are we worrying about it now?
The discussion above suggests that we shouldn’t be too concerned about the ‘march of the machines’ causing the destruction of humankind, but we should be aware that weak AI systems could have faults that cause injury or death. It is nevertheless worth starting to discuss the issues surrounding deep AI systems, because at some point we may develop such a system, and we should know how to address the issues it raises, both morally and technologically.