Worried about amoral robots? Try reading them stories.

Why don’t we trust robots? After decades of tinkering, engineers and scientists have programmed humanoid robots to be eerily like us. But emotions and ethics remain just beyond their reach, and that gap is the basis of our fear that, when push comes to shove, artificial intelligence won’t have our best interests at heart.

But storybooks might fix that, says a team at the Georgia Institute of Technology. “There is no user manual for being human,” Dr. Mark O. Riedl and Dr. Brent Harrison, computer scientists at Georgia Tech, emphasize in their latest paper. Growing up, no one gives humans a comprehensive list of ‘dos’ and ‘do-nots’ for telling right from wrong; gradually, through example and experience, most people absorb their culture’s general values and then try to apply them to new situations.

Learning “unwritten rules” from a story…

Link to Full Article: Worried about amoral robots? Try reading them stories.
