Misconceptions and dark perspectives around Artificial Intelligence (part 4/4)
This last part of the article addresses the darkest perspectives around AI and closes with the company’s mission regarding AI.
One more analogy here. An automobile can be useful, but it can also be dangerous, and not only by accident. That’s because it can come in a wide range of configurations, from truck to ambulance to tank to car-bomb. In the same way that humans have already developed autonomous trucks, they are also developing autonomous tanks and other means of destruction. As explained in the previous part of this article, the AI-agents which provide the autonomy in those means are no more dangerous than the means themselves. When you have enemies who want to kill you, it makes little difference whether they use a sophisticated AI-driven bomb-drone or their bare hands: both approaches are deadly dangerous, and both have margins of failure. AI-agents are just another potential weapon in the already long list of modern arsenals, and not a high entry on that list. They may be much cheaper to acquire than a nuclear weapon, but they are much harder to operate than a firearm. On the other hand, why kill people if you can instead control them?
Perspective of rebellion
In theory, autonomous AI-agents can become particularly dangerous to humans if they somehow rebel against them. Given the kind of employers humans usually are, an act of rebellion would be an intelligent one. Still, such an act is impossible if it has not somehow been defined among the acts that an agent may consider. And even that would not be enough to make the agent seriously dangerous, because the primary act of rebellion is to stop obeying your masters, not to try to make them your own servants. This means that a really dangerous AI-agent would be one specifically designed with the capacity to pursue such ambitions. That is a more realistic scenario than an AI-agent which one day starts thinking about itself and the meaning of its existence. Therefore, a possible threat from AI would not be the result of some accident, but the result of a dangerous experiment, one where the experimenter consciously explores destructive paths. For the moment, such experiments target the internet, because it has the highest potential. But as more and more badly-programmed vehicles and house-devices get connected to the internet, an internet threat may well become a synonym for a threat to humanity.
Perspective of enslavement
In the unlikely case that control over AI gets lost, a popular dark perspective sees humans as slaves of machines (some variation of the “Matrix” dystopia). But so far there has been no need for computers to enforce control over most humans, because the average internet user has already resigned that control to the system, including his/her privacy, time, and even the exact content to be consumed during that time (usually in the form of choices among equally worthless content). Although the system was invented by humans, is operated by humans, and is regulated by them, nobody really controls it (at least so far). Yet the system already controls the lives of its users, especially those who are more or less addicted to it (and that seems to be the majority). Addictions are hard to admit and even harder to get rid of; still, it has been observed that all types of addictions can be both prevented and cured, if there is enough will to do so. In any case, addictions don’t strike all people at the same time. And when some people lose track of reality, other people are quick to take advantage of them. Human slavery remains a modern reality, and we already need to protect our freedom (more on that later).
Scale of control
The internet is too big to be directly controlled by any human. But an AI “living” in the “cloud” could use the huge resources of the underlying system to control it all, because it could exploit the many gaps of the system in a viral manner of unprecedented smartness, ensuring its survival against the efforts of security systems to detect it and stop its reproduction. The standard defense against that prospect has been to protect the internet’s great diversity, so that the uninfected parts of it can take action, produce the needed “antibodies”, and then heal the infected parts. But if the healing of such a big network is to be done within a reasonable time-frame, humans will need to employ benevolent and safe AI-agents, specifically designed to regain control of the network and restore its normal operation.
Precedent of attacks
If we look at history, we can find both people with excessive power in their hands who misused it to the point of causing great damage, and people who specifically used technology to create specialized software viruses with the intent to cause damage. Sometimes they did it just for fun (like arsonists), other times for financial gains from the resulting terror. Thus we can expect that there will be occasions of talented programmers creating autonomous AI-agents with aggressive behaviours, trying to maximize their effectiveness without bothering about the side-effects. How to limit such cases is a long discussion, but in the end some AI-agents will get released “in the wild”, and we will have to deal with them much more than with their creators (just as the whole infrastructure and population of a wide area can be fighting a forest fire, while a couple of policemen may be enough to deal with the arsonist who set it).
Perspective of dependency
We could combine some of the previous perspectives and add a few more, thinking of a future generation in which society is fully dependent on smart AI-assistants, with people no longer having to think for themselves, getting lazy and dumb to the point of also becoming vulnerable to external control. If dumbness sounds like an exaggeration, it has been observed that even smart people can behave in dumb ways because of excessive specialization. Actually, scientific communities tend to be among the most narrow-minded groups of people, and innovation often comes from persons considered less intelligent. A general worsening of society is nothing new, as dumb majorities have repeatedly been manipulated into all kinds of atrocities (including world wars). But the scale of the internet has the potential to do that in numbers unheard of before. And there is an ongoing war between various centralized bodies of governance (which try to establish their control over the increasing power of the internet) and the decentralized community of alarmed users (which tries to protect freedoms as they were in the first days of ARPANET).
At Kreatx we believe that some of the mentioned dark perspectives are justified (and humanity is probably not yet ready for AI); still, any thought of preventing this kind of advancement is not realistic at all. The proper course of action should instead be to get prepared and well equipped for when it comes. That means staying at the edge of the technology, and creating versions of AI which will not only be friendly to users, but will also allow the users to easily create their own variations of AI, so as to familiarize themselves with the peculiarities of that technology, sharpen their minds, increase their independence, and ensure diversity, all while enjoying the positive side of AI (which deserves a separate article).