It seems to me that the media are really hyping the state of AI research and the general 'march of the machines' narrative, and this is causing confusion and mild panic amongst us. I think for all of us, including those actually 'doing AI', it would be really good if there were an agreed 'scale' against which all AI systems could be graded. If we had this, it would give researchers a basic road map to follow to create better AI systems, and it would also allow the general public to put what the media tells us into a more realistic context.
I am not an AI expert, and maybe a little knowledge is a dangerous thing, but surely a group of such experts could come up with a generally agreed universal scale of AI evolution, grading from the simpler game-playing systems through to 'the Terminator' and beyond.
Just as an example to be shot down in flames, I will present my own 'AI evolution scale' and the analysis that produced it.
My scale is based upon assessing an AI system in terms of some fundamental questions:
What can it do?
Can it just play chess, hold a contextually meaningful conversation, or diagnose a specific type of illness, or is it like us, able to do hundreds of different tasks and functions?
How can it learn to be better?
Learning is critical to doing anything, so can an AI system learn in a supervised or unsupervised way, learn from its own mistakes, or draw on its history of past events to help make decisions in the present?
How 'human-like' is it?
This question is all about self-awareness and emotional state. For example, can the system identify emotions and change its response accordingly? Does it have an emotional state of its own, and can that state change with outside influences, with what it 'knows' and with whom it interacts?
How does it interact with the world?
This question looks at how mobile the system is and how it can sense the world around it in ways that affect its ‘human-like’ characteristics.
How good is it as a parent?
This question looks at whether the system can reproduce and how it relates to and supports its offspring.
If a system gets top marks on all of these questions it is highly human-like and possibly more advanced than we are.
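Purely as an illustration of the idea, the five questions above could be encoded as a simple scoring rubric, something like the sketch below. The class name, the field names and the 0 to 3 score range are my own hypothetical choices, not part of any agreed standard.

```python
from dataclasses import dataclass

@dataclass
class AIAssessment:
    # Hypothetical 0-3 score against each of the five questions in the post
    task_breadth: int    # What can it do?
    learning: int        # How can it learn to be better?
    human_likeness: int  # How 'human-like' is it?
    embodiment: int      # How does it interact with the world?
    parenting: int       # How good is it as a parent?

    def total_score(self) -> int:
        """Combine the per-question scores into one comparable number."""
        return (self.task_breadth + self.learning + self.human_likeness
                + self.embodiment + self.parenting)

# Example: a chess engine scores low on everything except a narrow ability to learn its one game.
chess_engine = AIAssessment(task_breadth=0, learning=1, human_likeness=0,
                            embodiment=0, parenting=0)
print(chess_engine.total_score())  # prints 1
```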
Below is the simplistic analysis for the grading system. (I apologise for the quality, but WordPress is rubbish at importing tables.)
My analysis suggests 17 general levels of AI that can be sub-categorised. Levels 1 to 9 cover what is often called 'Simple AI', whereas levels 10 to 17 deal with the 'hard problem' of self-aware, conscious AI systems coupled with advanced and independent mobility and reproduction.
My limited understanding of current AI research suggests that the vast majority of current systems sit at levels 1 to 3, and that highly advanced research may be taking us toward level 4 or 5. This simplistic scale shows just how far we are from the AI apocalypse.
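To make those bands concrete, here is a minimal sketch that maps a level number onto the coarse categories just described. The band boundaries (1 to 3, 4 to 5, 1 to 9, 10 to 17) come from the text above; the function itself and the wording of each description are purely illustrative.

```python
def describe_level(level: int) -> str:
    """Map a level on the proposed 1-17 scale onto the coarse bands
    described in the post; the wording of each band is illustrative."""
    if not 1 <= level <= 17:
        raise ValueError("the proposed scale runs from level 1 to level 17")
    if level <= 3:
        return "Simple AI: where the vast majority of current systems sit"
    if level <= 5:
        return "Simple AI: the frontier of highly advanced current research"
    if level <= 9:
        return "Simple AI: beyond anything demonstrated so far"
    return "Hard-problem AI: self-awareness, consciousness, mobility and reproduction"

print(describe_level(2))   # a typical current system
print(describe_level(12))  # far beyond current research
```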
If every AI system were assessed and assigned a number on such a scale, it could reduce the amount of 'hype' caused by overenthusiastic and incompetent reporting in the media.
For real experts in AI, I don't see why such a categorisation system could not be developed quite quickly.