January 12, 2017
“We will lose the battle against machines” (Steve Wozniak, Apple co-founder).

Technological development will not only shape the model of social organization but will also raise ethical issues that are difficult to resolve. Advances in Artificial Intelligence may generate new global challenges that are difficult to overcome.

There is increasing talk of, and growing practical use of, new techniques that will have enormous consequences for people, such as:

  • “Transhumanism,” an interdisciplinary movement that seeks to overcome human biological limitations through the fusion of the human body with machines.
  • The use of intelligent machines capable of replacing people, and not only in physical or repetitive work.
  • The possibility of acting genetically on people’s health, perhaps even halting ageing.

It is not only a question of technological advances that can improve the lives of mankind; these advances can also generate major social changes (e.g. further ageing of the population) and even raise ethical dilemmas (e.g. what is known as disparity: extremely expensive treatments to slow ageing, or even avoid dying, which for now remain beyond the reach of most people until they become widely affordable).

Leaving the solution of these dilemmas to the invisible hand of the market would create a significant biological gap between groups of people and could lead us, once again, to ideas that seemed long forgotten, such as the “superman” (Yuval Noah Harari).

As for artificial intelligence (AI), despite its undeniable advantages, we should not underestimate its potential dangers: through self-learning, computers might themselves create massive decision-making algorithms more powerful than those of human beings (superintelligence), which, in theory, could lead those machines to develop interests that conflict with ours.

When Elon Musk, co-founder of PayPal and CEO of Tesla, said that developing Artificial Intelligence was to “summon the demon”, he was referring to the fear, shared by other prominent figures such as Stephen Hawking and Bill Gates, about the risks this technology poses to the future of humankind.

The 2006 Nobel laureate in Physics, John Mather, recently said that “some people believe that they can control them (computers), but in fact it is like wishing to control time. It is something above and beyond us.” The crux of the matter lies, in my opinion, in the answers to the following questions:

  • Who will control these superintelligences? Will it be governments, companies, or citizens? For citizens to be in control, it would be essential for AI to develop in a democratic and transparent environment.
  • What relations will be established between people and these superintelligences? The answer to this question is closely tied to the previous point; ideally, the relationship between the two types of intelligence would be one of collaboration, or of the machines serving us, and always under human control.

In short, whether or not the technological singularity (expected by some scientists around the year 2045) occurs, it is clear that new technologies will significantly change not only our lifestyle but also the current structures of social organization, which may represent a global challenge of crucial importance, difficult to manage through local government structures.