Misconceptions and dark perspectives around Artificial Intelligence (part 2/4)

In the first part of this article we listed various misconceptions about AI. Demolishing them was the easy part; here we continue with the constructive part, after addressing the main misconception.

Misconception about intelligence

In the definition of AI, the Artificial part is simple to define as “something achieved by human-crafted means”. The difficulty has always been with the Intelligence part (which has nothing to do with the CIA). Today’s artificial systems fail at intelligence because they don’t specialize in it; they hardly even get its definition right. A good definition has to be useful, so we should reject all definitions of intelligence that are based on a direct comparison with humans. So far, every time an artificial system has managed to match or surpass humans at some task, we have realized that it does so not by being intelligent, but by specializing in that task (as explained in the previous part of this article). Therefore, intelligence is not about the task achieved, but about the way of achieving it.

True AI

For a system to be considered intelligent, not only should its way of achieving things be intelligent, but that way should be the product of an internal intelligent process, not a blind implementation of an external natural intelligence (this is demonstrated by the “Chinese room” argument, where an agent carries out a perfect written conversation in a language he doesn’t know, simply by consulting a detailed book written in his own language). And for that internal process to be measurable by us, we need a way to reason with the system. Therefore, an intelligent system needs the ability to communicate and reason at a high level (“high” is used here with the meaning it has in programming languages), near the level of human language. Resemblance to human language is not a prerequisite, but since the creators of AI are humans, compatibility with human language has obvious advantages.
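To make the “blind implementation” point concrete, here is a minimal sketch in Python (purely illustrative; the rule book and phrases are hypothetical): the responder produces plausible replies by lookup alone, with no understanding of the conversation taking place.

    # A toy "Chinese room": replies come from a fixed rule book,
    # so the agent never interprets what is being said.
    RULE_BOOK = {
        "ni hao": "ni hao! ni hao ma?",       # "hello" -> "hello! how are you?"
        "ni hao ma?": "wo hen hao, xiexie.",  # "how are you?" -> "very well, thanks."
    }

    def chinese_room(message: str) -> str:
        # Pure symbol matching, like the person in the room following
        # instructions written in a language they do understand.
        return RULE_BOOK.get(message.strip().lower(), "dui bu qi?")  # fallback: "sorry?"

    print(chinese_room("ni hao"))      # looks like conversation...
    print(chinese_room("Ni hao ma?"))  # ...yet nothing was understood

However large the rule book grows, the intelligence resides in whoever wrote it, not in the process executing it.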

Varieties of languages

Animals are autonomous agents, but not particularly intelligent ones. That’s because they are fully governed by their instincts, which makes them predictable and non-advancing, much like modern autonomous robots. The hardware of animals is much more advanced than that of robots, but their software behaves quite similarly. Advanced software is what makes humans superior, and that is what AI aspires to acquire. The difference between the two qualities of software is most evident in language. Animal languages have low descriptive power, strictly limited to their specific survival needs. Computer languages have theoretically unlimited descriptive power (inherited from their abstract mathematical background), but their performance degrades the further they move away from their hardware reality (just as mathematics becomes a poor tool when it tries to communicate qualities instead of quantities). Human languages have excellent descriptive power and sit so far above the underlying hardware (the human brain) that scientists still have no clear idea how the transition is made.

Human language

Language has always been used as the primary means for testing the cognitive abilities of a person (and especially of little kids). An intelligent system should be able to communicate with humans (as Turing suggested). But most importantly, that communication should be the product of an internal intelligent process, reasoning about the content of that communication in real time, for as long as it lasts. That means the system cannot rely solely on a knowledge database; it must also process incoming reasoning not yet present in its memory. And that implies the ability to perceive incoming reasoning for what it really is, i.e. intelligence outside itself (in contrast to mere information).

Inverse-role Turing test

In other words, true AI should be able to pass the Turing test, but not in the role of a tested agent. It should instead pass it in the role of the interrogator, like this:

  • Be able to interrogate other agents. That means being able to:
    • Ask a series of questions and collect the answers.
    • Evaluate how relevant the answers are to its questions.
    • Give reasons (not numbers) for its final verdict.
  • Do that based on its own intelligence. That means to:
    • Do it without external help (without querying databases etc.)
    • Do it without knowing what a Turing test is, being informed about its role only right before the test begins. This is the most critical requirement (and not hard to test).

On the other hand, the AI:

  • Does not need to succeed in distinguishing the tested agents (the success ratio can indicate the level of intelligence, not its source).
  • Does not need to perform its role the way the average human would do.

What is effectively expected from a system, in order to prove its intelligence, is simply to manage to perform the test (a sketch of such a test harness follows below). That is the baseline for starting to talk about intelligence. Can the system understand the description of the test well enough to play its role, without previous knowledge of it? If not, it has little to no intelligence (definitely less than a preschooler).
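As an illustration only, here is how such a harness might look in Python. Every name and the briefing text are hypothetical; the essential constraint is that the plain-language briefing is all the specification the candidate system receives.

    from abc import ABC, abstractmethod

    class Interrogator(ABC):
        # The candidate system. It gets no hard-coded test logic,
        # only the textual briefing handed to it at test time.

        @abstractmethod
        def brief(self, instructions: str) -> None:
            """Receive the plain-language description of the role."""

        @abstractmethod
        def next_question(self) -> str:
            """Produce the next question from the system's own reasoning."""

        @abstractmethod
        def receive_answer(self, agent_id: int, answer: str) -> None:
            """Take in an answer and evaluate its relevance internally."""

        @abstractmethod
        def verdict(self) -> str:
            """Return a reasoned argument in prose, not a score."""

    def run_inverse_test(system: Interrogator, agents, rounds: int = 5) -> str:
        # 'agents' are the hidden participants (one human, one machine);
        # each is assumed to expose a reply(question) method.
        system.brief(
            "You will question two hidden agents, one human and one machine. "
            "Ask whatever you like, judge how relevant the answers are, and "
            "explain in words which agent you believe is the machine."
        )
        for _ in range(rounds):
            question = system.next_question()
            for i, agent in enumerate(agents):
                system.receive_answer(i, agent.reply(question))
        # Success is not required; managing to play the role is the point.
        return system.verdict()

Note that verdict() returns prose rather than a score, and brief() is delivered only at test time: these two details encode the requirements listed above.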

Definition of true AI

By negating the above, we can arrive at the following definition:

Artificial Intelligence is the ability of a human-crafted system to understand (without a training phase) and perform (without needing to succeed) arbitrary reasoning tasks (tasks of logical decisions), without previous knowledge of them (given only fair descriptions of them).

Of course, that definition doesn’t address all the details that make intelligence a difficult concept to theorize about, but it is useful enough for practical purposes.

Perception

Today’s artificial systems cannot pass the inverse-role Turing test. Different systems miss different parts of intelligence, but to the writer’s knowledge they all miss the part of perception (mentioned in the Human language paragraph above), so they need to be programmed down to almost every detail (in other words, they remain dumb machines). The reason is that they treat human language as merely a form of input/output, something to transform into numbers as soon as possible and then forget about, instead of treating it as a precious form of processing, which is higher-level thinking. By doing that, they lose the chance to utilize the intelligence embedded within the language, and instead of performing thoughts they again perform mere calculations. Of course, at the hardware level everything is calculations, but the majority of today’s systems use calculations all the way up to the highest level, because their whole analysis of intelligence is based on numbers. The deeper reasons are the misconceptions mentioned earlier in this article.
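The contrast can be sketched as follows (both functions are hypothetical stand-ins, not real systems): the first pipeline discards the linguistic form immediately, while the second keeps a representation that later processing steps can still read.

    def numeric_pipeline(sentence: str) -> list:
        # The common approach: map words to numbers right away and
        # reason only over the numbers (a toy "embedding" via char codes).
        return [sum(map(ord, word)) / len(word) for word in sentence.split()]

    def linguistic_pipeline(sentence: str) -> dict:
        # The approach argued for above: keep the representation
        # linguistic, so later steps still operate on language
        # (a toy subject-verb-object split).
        subject, verb, *rest = sentence.split()
        return {"subject": subject, "verb": verb, "object": " ".join(rest)}

    print(numeric_pipeline("dogs chase cats"))     # numbers; the meaning is gone
    print(linguistic_pipeline("dogs chase cats"))  # structure remains for reasoning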

Our approach

At Kreatx we believe that the fastest route to true AI is to keep things as high-level as possible and to utilize human language (albeit a strict version of it) in almost every step of processing. But real AI has been the source of real fears, so in the remaining parts of this article we will address dark perspectives concerning the future role of AI in human societies, and we will close with the company’s mission on AI.
