AI way to hell

When we talk about Artificial Intelligence, the conversation is usually about technology and technique. But to understand it, we should put ethics first.

by Leonardo Caffo

AI 04 July 2019

The best way to talk about Artificial Intelligence is to start from its definition: too bad, though, that there is no unambiguous one. Theoretically speaking, AI imitates the paradigm of human intelligence, but that is actually never the case, not even in the most advanced experimental programs worldwide. Given this, the only option is to take a step back and look for a definition of intelligence tout court.

Human intelligence comes from the merging of two separate elements, layered with the influence of external conditions: the union of the emotional sphere with the behavioral and cognitive one. Now, when humans began reasoning about machine intelligence, they focused exclusively on its cognitive side, which is why to date there is no machine with an emotional intelligence, experiencing, say, fear of death. Still, as psychology confirms, all these emotional components together shape the development and features of human intelligence. This places us at a crossroads: it implies that we are either somehow trying to build a superintelligence which has nothing to do with the human one, or generating a new form of alterity because we are unable to deal with our own. Just think of how popular robotic pets are in Japan. The reason behind this popularity is not technical but social: in a society where people literally have no time to look after a real animal, an object you can turn on and off, with no instincts or needs, becomes a sort of 'surrogate alterity'. In such a scenario, technique is a form of social sublimation.

Usually, when we talk about Artificial Intelligence, we tend to make the opposite argument: what will this technology let me do that I couldn't do before? However, to truly understand something about AI, and about moral issues in general, we need to reverse the factors, putting the ethical issue first, technique second, and the ethical issue generated by technique third. At the same time, we can reasonably assume that big tech organizations are capable of instilling in society new needs that did not exist before, meaning that technique and technology, used in a certain way, can end up shaping anthropology. The Israeli historian Yuval Noah Harari argues that technology is polarizing society: on one side a narrow élite that owns the technology, on the other those who develop it, and in between a mass of 'unemployables' (read: the vast majority) who have been cut off from progress. Pushing this theory to its extreme and taking the mass out of the equation, we may suppose that at some point there will be two kinds of hominids, the result of a technology-driven transformation of the species.

At that point another issue will arise, just as it did in the past with the nuclear bomb: that of governing a technique which, being more powerful than politics itself, cannot fit into the political agenda. The history of thought, however, teaches us a different lesson: Plato considered philosophers to stand above the watchmen. In other words, it should be up to transnational bodies, such as the European Union, to rule on technique and technology. As technology becomes more and more pervasive, the structures we have built our society on will have to be reconsidered: what will remain, then, of state organization as we know it? If, as many argue, technical evolution cannot be stopped, all we will be left with is to reimagine what comes after democracy, which by then will have been inevitably and irreparably affected. This idea of the end of the democratic model is nothing new: it first emerged in the 1980s with Gianni Vattimo's theory of the 'transparent society', which holds that if society became truly transparent, crystal-clear and enlightened, then democracy, being based on opacity, would inevitably fail.

But where does this dreaming of androids, and therefore of Artificial Intelligence, come from? The idea can be traced back to Leibniz's theory of a universal language, which at the end of the day is nothing but logic and programming language. In strictly philosophical terms, if the aim of Artificial Intelligence is to replicate human intelligence, then artificial intelligence cannot exist: we are pursuing something that cannot happen, because what makes us human cannot be translated into machine language. This is because in a programming language we can, say, encode the reaction to 'sadness' by specifying an input (death) and an output (grief), but we cannot encode the whole range of emotions a person feels between input and output, between action and reaction. Why, then, are we chasing the android myth? As a matter of fact, human beings carry within them an ongoing longing for otherness, for the elsewhere, so as not to have to deal with what is already 'here' and 'now'. The search for alien life and the utopia of Mars colonization are just two examples of this longing for alterity. Humans have no competence in managing their own alterity and therefore look for a different alterity to manage, revealing a serious, and clearly psychological, problem.
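To make the argument concrete, here is a minimal, purely illustrative sketch (in Python, not drawn from the article; the event and reaction labels are hypothetical) of the kind of input-output mapping described above, and of everything such a mapping leaves out:

```python
# A toy illustration of the input -> output encoding of 'emotion' the text
# describes. Labels are hypothetical examples, not any real system's design.
REACTIONS = {
    "death": "grief",
    "loss": "sadness",
}

def react(event: str) -> str:
    """Return the pre-programmed 'emotional' response to an event."""
    return REACTIONS.get(event, "no reaction")

print(react("death"))  # -> grief
# Everything a person actually feels between the event and the reaction has
# no place here: the lookup table is the machine's entire 'emotion'.
```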

We are urged to ask ourselves what we are really trying to achieve here, knowing that the moment technology and AI are fully developed we will no longer be free, since technique is, de facto, the colonization of the non-technical. We are always acting within the frame of Hobbes's 'social contract' theory, according to which we trade rights in exchange for the chance to achieve specific goals through technology. The development of Artificial Intelligence will do nothing but amplify this phenomenon.

Artificial Intelligence is a big bet, and a socially crucial one: we delegate power to machines which, unlike us humans, always get the equations right, and, paradoxically, what we would get in return is a sadder but more equal society. On another level, if we want to preserve democracy as we know it, we would have to set limits on technology. This is a genuine exclusive disjunction: 'transparent society' means power to machines, 'democratic society' means setting limits on technology. The relation between humans and technology, though, cannot be limited; we cannot simply say 'let's stop exploring'. At most we can decide to redirect the creative process towards different goals.

Given that technological change runs faster than moral progress, it is time to think about how to make them move at the same pace, or at least about how to accelerate moral progress. The latter could happen, among other ways, through the writing of a sort of constitution, a charter of inviolable rights covering both the environment and its protection, given the damage technological advances are causing, and human rights, which technology in the hands of a few risks violating. I am inclined to think that the destiny we are doomed to is self-destruction, reached by a long, and by no means random, sequence of steps. Until now, each and every technological disruption has added a little more damage to the planet; one after the other, these steps led to the Anthropocene, whose repercussions on the environment are now huge.

Climatologists claim we have at most 12 to 14 years before the degradation of the biosphere becomes irreversible, and the next evolution in technology will probably do nothing but shrink this time frame. We have only a few years left to live with dignity on this planet. The AI race may have come from this too: from the dream of saving ourselves from the imminent catastrophe, if not as bodies, then at least as mental entities, since everything material will end.