Misconceptions and dark perspectives around Artificial Intelligence (part 1/4)

Kreatx is one of the companies currently investing in artificial intelligence (AI). A post about AI, even one that does not dwell on AI’s potential benefits, could be read as an advertising effort justifying the company’s decision, but that is not the case with this article. Its aim is instead to shed some light on the company’s current approach to AI. Most of the points made here are not new, and the goal is not to criticize the approach of other researchers, but rather to explain why we opted for our own. This does not mean that everyone in the company agrees with every statement made here; the article simply provides the theoretical background of the current approach. As its title implies, it will attempt to sort out various misconceptions around AI. The second part will also provide a useful definition of AI.

Misconception about hardware imitation

AI is not about imitating the human brain’s hardware. Intelligence resides in the software; the hardware merely hosts that software in its memory (regardless of whether that memory is read-write or read-only). No matter what kind of hardware is used (original, emulated, or simulated), it will be just another Turing-complete machine. Better hardware can improve performance (more on that later), but it cannot provide intelligence by itself (which leads back to the old matter-versus-mind philosophical debate, beyond the scope of this article).

Misconception about behavioural imitation

Neither is AI about imitating human behaviour. Alan Turing successfully defined the minimum requirements of a universal computational system (the “universal Turing machine”), but his proposal for measuring the intelligence of such a system (the Turing test, where a system counts as intelligent if an interrogator cannot tell whether it is human) has long been proven a dead-end approach, practically sterile. There is nothing particularly intelligent in memorizing a long list of answers and ordering them so as to simulate a dialogue. That leaves intelligence out of the picture from the very beginning, and most of the process could be replaced by a book containing the list (however wise its content, a book in itself has zero intelligence).
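To make that concrete, here is a minimal sketch in Python of such a memorized-answer “dialogue” (the table, the names, and the canned replies are invented purely for illustration); nothing in it reasons about the question:

    # A hypothetical "chatbot" that only replays memorized answers.
    # The table below is its entire "mind": remove it and nothing remains.
    ANSWERS = {
        "hello": "Hello! How can I help you?",
        "who are you": "I am a program that looks up canned replies.",
        "what is ai": "That is a long story.",
    }

    def reply(question):
        # Normalize the input and look it up; no understanding is involved.
        key = question.strip().lower().rstrip("?")
        return ANSWERS.get(key, "I do not know.")

    print(reply("Who are you?"))  # -> I am a program that looks up canned replies.

Printing the same table on paper would preserve all of its “knowledge”, which is exactly the point about the book above.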

Misconceptions about the Turing test

The Turing test has been advertised as a way of proving the intelligence of a system, but it treats intelligence as a mere appearance. The argument goes: if a system can perform something resembling an intelligent conversation, then it resembles an intelligent agent, so it should be considered intelligent. The last step of that argument is illogical, because resemblance is not a reliable means of identification. The creators of the various chatbots (and of the so-called application wizards) are fully aware that their algorithms do not hold intelligent conversations; they merely use trees of memorized answer patterns to interface with the conventional software behind them. All in all, the Turing test does not measure the intelligence of such systems, merely their simulation skills. Such systems are not intelligent, in the same way that flight simulators do not fly. A more relevant test is proposed in the second part of this article.

Misconception about processing power

Just as AI is not about the capacity to store knowledge, it is not about processing power either. There is nothing particularly intelligent in repeating an algorithm at high speed and with high precision. Modern computers excel at both measures, yet that does not make them any smarter than the humblest natural system. The opposite of intelligence can be defined as the hopeless repetition of the same mistake (something observable in many animals). Computers are well known for repeating the same mistakes at higher speed and with higher precision (when they manage not to halt, that is). And that will not change with future architectures of even higher performance (quantum computing, etc.).
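As a small illustration (a Python sketch, not part of the original argument), a computer repeats the same rounding mistake millions of times per second; the speed multiplies the flaw instead of correcting it:

    # Adding 0.1 ten million times at full speed: every single addition
    # carries the same tiny binary rounding error of the value 0.1, so the
    # blazing-fast result is systematically off the exact 1000000.0.
    total = 0.0
    for _ in range(10_000_000):
        total += 0.1
    print(total == 1_000_000.0)  # -> False: the same mistake, repeated faster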

Perspective of competition

A common perspective is that the ideal harmonious coexistence of two things requires them to have distinct, complementary roles. When AI is approached as an imitator of human intelligence (or as a superior version of it), it has the potential to become a dangerous competitor to it. Nowadays this is mostly advertised through abstract board games, where software without emotions that makes no mistakes can dominate any tournament against imperfect opponents. But such software is basically a scaled-up version of a pocket calculator. What permits a calculator application to perform mathematical operations faster than humans and without mistakes (although a user can still make mistakes with it) is its specialization. Even though it does that better than humans, the calculator is considered neither intelligent nor dangerous to humanity.

Misconception about learning and adaptation

AI is not even about learning (or training). If in the end all you do is possessing knowledge, it doesn’t matter much the way you acquired it. An advanced learning algorithm has many applications, still the intelligent part is about what to do with it. An ignorant 3-year-old human is more intelligent than a software that learns in a short period of self-training how to beat the human champions of finite-state games. In such games, a hypothetical database of every move can be a perfect player, all while being fully dumb. Finding ways to effectively create a compressed version of that database demonstrates the intelligence of the involved programmers (who may be more intelligent than all the players of those games), but not of the resulting system which blindly picks the best move every time. The quality of the move doesn’t demonstrate the quality of the player, since it could as well be picked randomly. We have to somehow reason with the player before rating its intelligence, a process which would quickly reveal that the specific software’s exceptional performance was not the product of an intelligent agent (just like the high performance of the pocket-calculator mentioned earlier).
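Such a “database of every move” is easy to build for a small enough game. The Python sketch below (the classic 21-stone take-away game stands in for the finite-state games above; all names are illustrative) precomputes a table of winning moves, after which the “player” is a single memory lookup with no reasoning left in it:

    # A perfect yet dumb player for a tiny finite-state game: 21 stones,
    # each turn a player removes 1, 2 or 3, and whoever takes the last wins.
    # The "player" is nothing but a precomputed table of best moves.

    def build_table(limit=21):
        # Retrograde analysis: mark each position with a winning move, if any.
        table = {0: None}  # with 0 stones left, the player to move has lost
        for stones in range(1, limit + 1):
            table[stones] = None
            for take in (1, 2, 3):
                if take <= stones and table[stones - take] is None:
                    table[stones] = take  # leaves the opponent in a lost position
                    break
        return table

    TABLE = build_table()

    def best_move(stones):
        # The entire "player": one dictionary lookup, zero reasoning.
        return TABLE[stones] or 1  # from a lost position, stall with any move

    print(best_move(21))  # -> 1, leaving a multiple of 4, which is unbeatable

Whatever intelligence exists here sits with whoever wrote build_table; best_move itself plays perfectly without understanding anything about the game.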

Specialization

The above observation about game-playing software holds for every other application built on the principle of specialization. Software that is virtually perfect at what it does but stupid at everything else is not intelligent at all, and hardly a threat to humanity. Likewise, elephants are stronger than humans, leopards are more agile, dolphins are better swimmers, and many other creatures outperform humans in their specialized environments, yet none of them is considered more intelligent than a three-year-old human, or anywhere close to threatening human domination of the planet. And a modern computer that combines many different specializations within a single device remains a dumb machine (more on that in the second part of this article).

Misconception about dominating the planet

In fact, before humans employed machines to work for them, they employed various kinds of animals, precisely to take advantage of their superior performance. From the inverse perspective, bacteria could be considered a more dominant species (by plenty of measures), yet they are dumber and more predictable than all the creatures mentioned above. A hungry leopard inside a house can be more deadly than an evil AI agent inside a video game, and brainless nuclear missiles remain a bigger threat than the specialized software that controls their launch. We could try making that software smarter than the brains in our heads, yet the bigger danger would remain inside the head of each missile, because that is the part most specialized in causing destruction.

So what is true AI, and is it really safe for humanity? Those questions will be answered in the remaining parts of this article.
