No killer robots, please

Apparently, autonomous killer robots may be much closer to reality than anyone expected.

Sure, they have been around for decades — in movies. Who can forget HAL 9000 refusing to open the pod bay doors? Or the T-800 with the skin of Arnold Schwarzenegger telling that police officer that he’ll be back? Or how about the countless artificial intelligence systems that seem to think the best way to protect the planet is to wipe out mankind? If those AIs are so smart, why can’t they think of an end-around to preserving the planet that does not include our subjugation or destruction?

And yet, at an international conference last month in Buenos Aires, a group of AI experts issued an open letter calling for governments not to build autonomous killer robots. The letter garnered nearly 3,000 signatures from experts in the AI field, as well as close to 17,000 additional endorsements from the likes of famed astrophysicist Stephen Hawking, SpaceX founder Elon Musk, Apple cofounder Steve Wozniak and many, many other very smart people in the technology field.

In the letter, the AI experts describe autonomous weapons — which sounds much more level-headed than “autonomous killer robots,” but they are one and the same if you ask me — as those capable of choosing and engaging targets without human control. That term “engaging” is perhaps a pleasant way of saying, “shooting and killing with laser beams, armor-piercing bullets or exploding projectiles like Hellfire missiles.”

“Artificial Intelligence (AI) technology has reached a point where the deployment of such systems is — practically if not legally — feasible within years, not decades, and the stakes are high: autonomous weapons have been described as the third revolution in warfare, after gunpowder and nuclear arms,” the letter states.

We know from countless stories over recent years that the military is relying more and more on remote-controlled weapons. There are many benefits to this, primarily in that fewer soldiers are being put in harm’s way. The concern, though, is what happens when the brass is able to get rid of the human controls and let the AI do all the work.

In their letter, the AI experts said autonomous weapons will be cheap to mass produce and maintain, unlike nuclear weapons that require difficult-to-obtain materials and expensive storage facilities.

“Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity. There are many ways in which AI can make battlefields safer for humans, especially civilians, without creating new tools for killing people,” the letter states.

The inhumanity of war actually helps preserve peace. We do not send men and women to die on the battlefield unless there is absolutely no other option. If robots replace humans in the military, might we find ourselves engaging in war more frequently because we are weighing the cost of machinery rather than human lives?

There are plenty of other uses for autonomous robots that involve no killing. Think about C-3PO, that worrywart from “Star Wars” who only tried to serve humans and was “fluent in over 6 million forms of communication.” That is a talent that far exceeds the limits of the human mind, but for which a walking, talking and thinking computer could be programmed.  

Email Divilio. Follow him on Twitter @Daniel_KentNews.
