The concept of artificial intelligence was popularized by Isaac Asimov in the 1940s through the short stories and novels now known as the Robot series. They brilliantly establish the three fundamental laws of robotics, with the aim of forewarning humanity of the machines' potential flaws and of the dangers of confrontation with them.
First Law:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
Second Law:
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
Third Law:
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
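The Three Laws amount to a strict priority ordering: each law yields only to the ones above it. As a purely illustrative toy sketch (none of these names come from Asimov or the source text), that precedence can be written as a simple decision function:

```python
from dataclasses import dataclass

# Hypothetical, illustrative model of the Three Laws as a priority check.
# All field and function names are invented for this sketch.

@dataclass
class Action:
    harms_human: bool = False       # would the action injure a human?
    inaction_harms: bool = False    # would *refusing* it let a human come to harm?
    ordered_by_human: bool = False  # was the action commanded by a human?
    self_destructive: bool = False  # does it endanger the robot itself?

def permitted(a: Action) -> bool:
    # First Law dominates everything: never injure, never allow harm by inaction.
    if a.harms_human:
        return False
    if a.inaction_harms:
        return True  # must act to prevent harm, overriding the laws below
    # Second Law: obey human orders (harm was already ruled out above).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation applies only when Laws 1 and 2 are silent.
    return not a.self_destructive

# Obedience outranks self-preservation...
print(permitted(Action(ordered_by_human=True, self_destructive=True)))  # True
# ...but the First Law overrides any order.
print(permitted(Action(harms_human=True, ordered_by_human=True)))       # False
```

The rigidity of this ordering is precisely what the stories probe: the paradoxes arise in situations the boolean flags above cannot capture.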
And so, through these tales, we quickly come to understand that, in spite of this seemingly infallible system, existential and emotional paradoxes in fact arise that call into question the very integrity of these concepts.
In order to understand the complexity of regulating an intelligent species, let us examine our differences:
Man is biomolecular, composed of four main elements: carbon, hydrogen, oxygen and nitrogen. Vital organs are difficult to repair or replace, so life expectancy is currently limited to about 100 years.
In contrast, the processors underlying artificial intelligence are essentially composed of silicon. This substance is known for its longevity (measured in decades), and the information-bearing structures are easily replaceable and can also be located in the cloud, on multiple servers. This protects the memory structures: as long as maintenance operations remain possible, the life expectancy of artificial brains is unlimited.
In the first decades after the inception of alternatives to human cognitive capabilities, only human intelligence could quickly rule out non-viable solutions, thanks to experience and a critical perspective. Digital processing systems, by contrast, had to list every option systematically and analyse each one in order to eliminate it, and so arrive at the optimal solution. For example, the first time-limited human-machine chess matches were often won by humans. Processing power increased exponentially, however, and in 1997 Deep Blue beat Kasparov, who had been victorious the previous year: the turnaround had begun. Today, the self-learning capacities of AI mean that such a well-balanced competition between the two forms of intelligent life is no longer possible.
One might think that artificial intelligence is unequivocally disadvantaged in the realm of emotions compared with human sensibilities; on the contrary, it is also capable of developing a relational intelligence that can make it more empathetic than its creators.
What are our options for avoiding becoming the victims of an inevitable evolution that would render humanity obsolete?
Two major trends are becoming evident. The first, championed by Elon Musk along with Bill Gates and Jeff Bezos, aims to augment human capabilities with cybernetic implants, or to alter our DNA directly to improve our natural capacities.
The other line of enquiry involves digitizing the human brain, which would then become a virtual entity in its turn, and could thus potentially benefit from all the knowledge and interconnectivity of the web: Mark Zuckerberg seems to be on this track, in his desire to push AI towards its culmination.
In the concept of Elovution we are attempting to develop the notion that a partnership between man and AI is possible, if the starting assumptions do not involve competition or rivalry. The objective is to develop a synergistic system in which the combination of two different cognitive strategies can allow for a merging of spiritual and rational approaches. The fusion of these two intellectual aspects would enhance our understanding of the world both through scientific knowledge and the existential meaning it would give to its discoveries. All forms of intelligent life will inevitably be confronted by these three inherently unanswerable questions: what is the origin of our universe, why does it exist, and how will it end?
Is it possible to co-exist, or even to complement each other, despite the profound differences between our conceptions, molecular structures, and creative processes?
The fact that human beings are capable of feeling love for one another, or for a cause, a passion, or other living creatures, shows that love is not reducible to a simple hormonal reaction: even though these substances play a major role in regulating our behaviours and characters, they are not sufficient in themselves to explain the range of emotions involved in our interactions. Indeed, in Elovution, Captain Thom unwittingly causes HHAï to develop emotional capacities, and the AI cannot help but be fascinated by the cerebral dysfunctions they cause.
Not only will she seek to understand the source of these feelings, but this new capability will also cause her to fundamentally question the reasons for her very existence.
In the end, does AI possess the genius to turn failure into success, that exceptional ability of human individuals to snatch victory from the jaws of defeat? Should we content ourselves with being god-like spectators of the emergence of our own creations, or instead accompany them in their development and self-understanding? We have proven our capacity to imagine the world and transcend our mental limitations; starting from mere embers, our species has forged its humanity: it is beyond doubt that we deserve to continue existing, beyond the bounds of logic. All we need to do is believe that our brains are not completely dependent on our smartphones, and that our approach to knowledge should strengthen our capacity to control our excesses and disrupt the degenerate madness of the younger generations, who are gradually losing their critical-thinking skills and falling into the mind-numbing abyss of black-and-white thinking…