Could Artificial Morals and Emotions Make Robots Safer?

This past summer saw the release of the new film “Avengers: Age of Ultron.” Like so many recent movies, the villains in this one were once again killer robots. But the idea of deadly, weaponized robots isn’t confined to titillating movie plots. Such machines are already with us, in one form or another, in many places around the globe. The South Korean army has a robotic border guard—the Samsung Techwin security surveillance robot—that can automatically detect intruders and shoot them. Israel is building an armed drone—the Harop—that can choose its own targets. Lockheed Martin is building a missile that seeks out and destroys enemy ships while evading countermeasures. Amid concerns about how these intelligent weapons decide whom or what to target, and about the looming possibility…

Link to Full Article: Could Artificial Morals and Emotions Make Robots Safer?
