Misconceptions and dark perspectives around Artificial Intelligence (part 3/4)

A responsible company is not solely about profits; it should not overlook the side effects of its operations. For that reason, we continue here by looking at some of the concerns around AI.

Dark perspectives of AI

The misconceptions about AI addressed in the first and second parts of this article don’t mean that its not-yet-true versions are safe. A nuclear bomb of old technology remains more dangerous than a “smart missile” with a much smaller charge. Likewise, big raw processing power has the potential for big consequences, some of which may be negative. There are a number of dark perspectives on AI, which vary in their severity and in their chances of materializing. Some of them refer to the present, to AI as it exists today, but we will call it AI nevertheless, as those concerns may still be relevant when true AI becomes a reality.

Perspective of unemployment

A well-known dark perspective sees humans becoming useless as workers. Computers grow ever more powerful, able to process in a short amount of time the whole of humanity’s accumulated knowledge in electronic form, while a human cannot process more than a tiny fraction of that in an entire lifespan. The comparison is really unfair. Of course, there are still many important fields where computers perform poorly compared to humans (even though many prophets expected them to have surpassed humans in all fields by now), but the trend keeps rising. There are actual jobs where computers (whether as the hardware automation of robots or the software automation of AI) have replaced human workers, much as cars replaced horses as a means of transportation. Still, new jobs have appeared for humans, jobs directly related to the new technologies, and there is no clear sign that the information revolution will harm the job market more than the industrial revolution did. That doesn’t mean the transition will be smooth, but big-scale transitions have rarely been smooth in history.

Demographics

In any case, if AI gets smart enough to seriously lower the demand for human workers, chances are it will also think of a solution to that problem (although it doesn’t take a genius to think of some obvious solutions). And if that problem does occur, it will just be another factor in what are known as demographic changes. Such changes pre-date AI and have been a more realistic threat to social balance than the software that archives those changes and performs statistical analyses on them. Humanity will have to face that problem sooner or later, whatever the impact of AI may be by then, and AI may be more a part of the solution than a part of the problem. Still, AI’s evolution is only one side of the coin of AI’s impact on employment.

Perspective of laziness

We begin this perspective with an analogy. The passenger automobile was invented to facilitate the movement of people over long distances. That ease led some people to use cars even for the shortest of distances, hardly walking any more and instead gaining extra weight (along with other harm to their health). The invention was intended to make the difficult easy, not to make the easy easier. Similarly, the pocket-calculator was invented to make boring, repetitive calculations easier and faster. That ease led some people to use calculators even for the simplest arithmetic operations, no longer thinking about the values behind the numbers (e.g. about the consequences of spending). The invention was intended to make smart people more productive, not to make them dumber.

Smart-phones

The hardware of the pocket-calculator has evolved into the so-called smart-phone. Its software part (a potential one, since typical pocket-calculators are fully hard-wired) has been evolving into the AI running on such devices. The other side of the coin mentioned in the Demographics paragraph is that the average smart-phone user has been devolving into a dumb and consumable consumer (or sometimes recycler) of mostly trash information. It is important to note that this dumbing-down of users had been taking place long before any serious AI made it into any device. AI has the potential to further facilitate that dumbing-down, but it cannot actually cause it. Not exercising the mental part of the brain, or generally following the “path of least resistance”, is a pathology observed with every development of civilization that has made life easier (like the mechanical transportation mentioned earlier). Irresponsible use of any device is the user’s fault (e.g. it would be irresponsible to author this article on a smart-phone).

Dangerous agents

We begin the next perspective with a distinction. There are two types of dangerous agents: evil agents and immature (or irresponsible) agents. Immature agents are dangerous in that they don’t realize the destructive consequences of their actions. Evil agents are dangerous in that they don’t care about destructive consequences, or are even after them. Both are only as dangerous as the means available to them (this article stresses that point a lot). Indeed, a baby playing near a missile launcher’s button that could accidentally cause a nuclear holocaust is more dangerous than ten genius scientists in a prison cell conspiring to take over the world. Babies are generally kept away from dangerous buttons (and actual launch buttons work with safety combinations), and evil people are generally kept inside prison cells. From time to time, free people turn evil and bad things happen (like wars). From time to time, immature people find themselves in positions of high responsibility and accidents happen (even nuclear accidents). There are casualties, but life in general goes on. Still, it is wise to understand the nature of AI agents and know what to expect (e.g. we should not imagine AI agents as an army of humanoids).

Perspective of destruction

Let’s see now how that applies to AI. A non-autonomous AI is as dangerous as the human in control of it. An AI given autonomy can be potentially dangerous in itself, but no more dangerous than the means given to it already are. An AI in control of a nuclear missile launcher is dangerous because of the missile’s danger. An AI confined within a simulation is fully harmless: it can be limited not only in what it can do, but primarily in what it can know. An AI that is unaware of the physical world’s existence cannot threaten it in any direct way. Immature AI is expected to be kept away from anything beyond its capacity, and evil AI is expected to be confined within video-games and kept away from the real world. From time to time, some dangerous AI may be released out “in the wild”, but life will most probably go on. A nuclear accident remains more dangerous than a planned AI attack. Still, an AI attack can be considerably destructive and is more likely to happen than a nuclear accident. That will need one last part.
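To make the idea of such confinement concrete, here is a minimal sketch (not from the original article; all class and method names are hypothetical) of an agent whose only interface to “reality” is a simulation handed to it, so its actions can never reach beyond the simulated state:

```python
# Illustrative sketch: limiting an agent both in what it can do and in
# what it can know. Every name here is hypothetical, for illustration only.

class SimulatedWorld:
    """The only environment the agent is allowed to observe or act upon."""

    def __init__(self):
        self.state = {"position": 0}

    def observe(self):
        # The agent only ever sees simulated state, never the host system.
        return dict(self.state)

    def act(self, move: int):
        # The only action available: move within the simulation.
        self.state["position"] += move


class SandboxedAgent:
    """An agent whose entire interface to 'reality' is the simulation."""

    def __init__(self, world: SimulatedWorld):
        # No file handles, no network sockets, no hardware access are
        # passed in: the agent's means are exactly the simulation's means.
        self._world = world

    def step(self):
        observation = self._world.observe()
        # Trivial policy for illustration: drift toward position 10.
        move = 1 if observation["position"] < 10 else 0
        self._world.act(move)


world = SimulatedWorld()
agent = SandboxedAgent(world)
for _ in range(5):
    agent.step()
print(world.observe())  # {'position': 5} -- effects stay inside the simulation
```

The design choice reflects the article’s point that danger lives in the means: the agent is never handed file, network, or hardware access, so however flawed (or malicious) its policy may be, its effects end at the simulation’s boundary.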
