One of the most common misconceptions about AI, reflected in many works of science fiction and film, is that AI research and development is a straightforward progression that will inevitably culminate in artificial superintelligence. However, the existence or feasibility of such a superintelligence is currently more a philosophical question than a practical one: based on what we know today, it is far from certain that it can ever be created. The reality is that the machine-learning systems in use today have very narrow intelligence (known as artificial narrow intelligence) and are suited only to specific tasks. The next step would be artificial general intelligence, with a level of intelligence comparable to a human's – for now this remains a future development, though research is underway.
The limits of artificial intelligence
The machine-learning systems currently in use fall into the category of artificial narrow intelligence, which means they cannot think with human-level intelligence. They are perfectly capable at certain tasks, because they are trained for them, and can often perform them at a much higher level than humans, but they cannot solve tasks for which they have not been trained. AlphaZero may have beaten Stockfish, the world's best chess program, just hours after being taught the rules, but it will never be able to drive a car or write a newspaper article.
Is it all statistics?
Machine learning in general, and deep learning in particular, relies heavily on statistical methods, but this does not mean that artificial intelligence is purely applied statistics: information theory, differential calculus, matrix algebra and operations research are equally indispensable tools for machine-learning systems. Statistics is therefore an important element in how artificial intelligence works, but not the only one.
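To make the point concrete, here is a minimal sketch (all numbers invented) of one of the most basic learning procedures, gradient descent for linear regression. The loss function comes from statistics (mean squared error), but the minimisation itself is differential calculus and matrix algebra at work:

```python
import numpy as np

# Synthetic data: 100 samples, 3 features, known "true" weights.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=100)   # small noise

w = np.zeros(3)   # initial weights
lr = 0.1          # learning rate

for _ in range(200):
    # Derivative of the mean-squared-error loss with respect to w:
    # pure calculus and matrix algebra, evaluated with a matrix product.
    grad = 2 / len(y) * X.T @ (X @ w - y)
    w -= lr * grad

print(np.round(w, 1))   # close to true_w
```

The statistical part is the choice of loss; everything that actually moves the weights is linear algebra and differentiation.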
What can’t AI do?
Since currently existing AI cannot think with human intelligence, human oversight is needed to catch errors arising from the technology itself, poor data, biased sampling or misuse. If the input data is not accurate enough, an AI may fail to distinguish things that are obviously different to a human. A machine-learning system might, for example, put cappuccino and espresso in the same category, because the difference between them is much smaller than that between a croissant and an espresso. An AI might see a correlation between the number of divorces and margarine consumption if the two trends happen to be similar, while a human would immediately realise that it is just a coincidence. And a human can recognise a STOP sign even if it has been scribbled on or covered with a sticker, but an AI can be confused by it.
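The divorce–margarine kind of coincidence is easy to reproduce. In this sketch the numbers are entirely made up: any two series that merely trend in the same direction will show a strong Pearson correlation, which by itself says nothing about causation.

```python
import numpy as np

# Two unrelated, made-up yearly series that both happen to decline.
divorces = np.array([5.0, 4.7, 4.6, 4.4, 4.3, 4.1, 4.2, 3.9, 3.7, 3.6])   # per 1000 people
margarine = np.array([8.2, 7.0, 6.5, 5.3, 5.2, 4.0, 4.6, 4.5, 4.2, 3.7])  # kg per person

# Pearson correlation coefficient between the two series.
r = np.corrcoef(divorces, margarine)[0, 1]
print(round(r, 2))   # well above 0.9 — a strong "correlation" with no causal link
```

A statistical model only sees that the numbers move together; deciding whether that relationship is meaningful still takes a human.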
Because AI always works from input, a lot depends on the quality of both the input and the training. The expression "garbage in – garbage out" means that if you train an AI model on garbage, it will inevitably produce garbage. Unfortunately, recent years have provided numerous examples of AI adopting the biases present in its training data. One of the most striking is COMPAS, a system used in US courts to predict the likelihood that a defendant will reoffend. The algorithm predicted a higher risk of recidivism for African-American defendants, and a lower risk for white defendants, than the actual reoffence rates justified – a bias traceable to the racial biases of the people who selected the training data.
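A toy sketch of how a model inherits bias from its labels (all data and variable names invented): if historical outcomes depended partly on a group attribute rather than on merit alone, a model fitted to those outcomes learns to use the group attribute – the bias is baked into the "garbage" labels, not into the algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
skill = rng.normal(size=1000)            # the only legitimate signal
group = rng.integers(0, 2, size=1000)    # a protected attribute (0 or 1)

# Biased historical labels: past outcomes depended on group, not just skill.
label = (skill + 1.5 * group > 0.5).astype(float)

# Fit a logistic regression on [skill, group, bias] by gradient descent.
X = np.column_stack([skill, group, np.ones(1000)])
w = np.zeros(3)
for _ in range(2000):
    p = 1 / (1 + np.exp(-X @ w))         # predicted probabilities
    w -= 0.1 * X.T @ (p - label) / 1000  # gradient step on log-loss

print(np.round(w, 2))   # substantial positive weight on the group feature
```

The model faithfully reproduces the pattern in its training labels, including the part that a human would recognise as discrimination – which is exactly why biased input data is so dangerous.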
Neuron Solutions uses the latest advances in artificial intelligence research to find solutions to the challenges and problems you face!