Are you for or against robots? As long as they help and protect us, everything is fine. But what happens if they start picking quarrels with us? Artificial intelligence is much talked about these days, yet the debate is not new: must we give machines the power to dominate us? Can we dispense with the comfort and ease of automation and its ultimate form, artificial intelligence?

The issue has been making headlines for several decades, and it is still not resolved. Will it ever be? In the near future, that seems doubtful.

Born in Science Fiction, the theme of the rivalry between humans and robots is taking off. It has become a real object of reflection for philosophers engaged in contemporary society and concerned about the future of our species. The visionary novelist Isaac Asimov, one of the pillars of 20th-century SciFi, was the first to warn us, back in the 1940s. Concerned about the place that robots would take in the future and the threat they could represent for us, Asimov enacted three laws to frame robotics.

These three laws were the fruit of discussions between Isaac Asimov and John Campbell on the theme of robots. They were quoted for the first time in 1942 in his short story Runaround:

Law #1: A robot may not harm a human being or, through inaction, allow a human being to be exposed to danger;
Law #2: A robot must obey the orders given by a human being, unless such orders conflict with the first law;
Law #3: A robot must protect its own existence as long as this protection does not conflict with the first or second law.
 
Subsequently, these laws have been the subject of numerous developments and commentaries. Their purpose is perfectly clear: to protect our species against any hegemonic temptation of intelligent machines. Unfortunately, Asimov is not on the curriculum at West Point. The military elites of this world have not read him. We would like to see these rules endorsed by the international community. We would love to see the scientists and engineers responsible for designing and developing artificial intelligence comply with them strictly. Then we could be sure there would be no exceptions to their application.

The opposite happened. It seems that engineers, when working for any army, don’t give a shit about Asimov and his fucking rules. Of course, no robot vacuum cleaner has killed anyone so far. Maybe a few kitchen robots have sliced some clumsy fingers, but that was not in their programs; such bugs can be corrected. The truth is elsewhere. There is a much more serious issue: I mean killer robots. “Hey, you’re kidding! There is no such robot outside The Matrix!” Precisely: many things that exist in The Matrix also exist on this level of reality, even if we do not know it … or do not care. I tell you that killer robots exist. And you know them.
 
The US Army has concocted an awesome toy, a remote death-sower that protects the lives of US soldiers: a drone armed to the teeth, locating and erasing its target in the blink of an eye. What do you call that, if not a killer robot? It will be objected that the drone is not quite a robot: it is remotely controlled, it cannot take the initiative to kill, the order must be given by a human being. This is true, but even if the drone does not itself contradict the first law – “you shall not harm a human” – its behavior is in fact opposed to the second law: “A robot must obey the orders given by a human being, unless such orders conflict with the first law”. We must face the facts. The fighting drones made in the U.S.A. are indeed killer robots, and their descendants could be the germ of our destruction. Other examples could be mentioned that show terrible violations of the basic safety principles imagined by Asimov.

This topic leads to another one: artificial intelligence. Artificial Intelligence (AI) technology aims to create or simulate, in robots or software, an intelligence comparable to man’s, or more specialized. Progress already allows autonomous cars like the Google Car, which assist with, or even replace, human decisions – so much so that scientists worry about its possible misuse. (source)

“The development of artificial intelligence could mean the end of the human species.” This statement made by the physicist Stephen Hawking to the BBC at the end of 2014 left a lasting impression. Many found it excessive. Not all: Bill Gates, the founder of Microsoft, who can hardly be suspected of being hostile to progress, has confided that he belongs to the “camp of those who worry”. (source) Stéphane Lepoittevin, La Vie, hors-série Sciences : Bientôt immortels?

“Artificial intelligence is a scientific discipline looking for problem-solving methods with high logical or algorithmic complexity; by extension, in everyday language, it designates devices that imitate or replace humans in certain of their cognitive functions.” Its aims and development have always given rise to many interpretations, fantasies and anxieties, expressed in science-fiction stories and films as well as in philosophical essays. (source)

Nowadays SciFi authors are not the only ones to worry – far from it. A particularly rare event: in January 2015, hundreds of scientists and business leaders published an open letter on the Future of Life Institute website calling for limits on the risks that machine development poses to humanity. The philosopher Roger-Pol Droit, for his part, is more measured. “There are,” he says, “as many reasons to hope as to worry.” A former member of the National Advisory Committee on Ethics, he remembers the stir caused in 1996 by the birth of Dolly the sheep, the first cloned mammal in history. “Everyone was then convinced that humans were going to be cloned, and 20 years later it has not happened, thanks to international agreements that refused, for ethical reasons, to manipulate the human genome.”

True. But this reluctance is over. Researchers have just created the first mixed human-pig embryos. (source) Huffington Post Québec, content removed. Didn’t I tell you years ago that we could be sons of pig? It is essential, according to Roger-Pol Droit, that the future of the human adventure not be left in the hands of scientists alone. “It must,” he says, “be transparent and accountable to civic and civil responsibility.” (source) Stéphane Lepoittevin, La Vie, hors-série Sciences : Bientôt immortels?

Xavier Séguin
