Podcast: Why does Artificial Intelligence often turn out racist?

Between April and August, a small startup ran an experiment: it used artificial intelligence to judge a beauty contest. Of the 44 winners, only one had dark skin. That outcome reflects how AI systems absorb the biases of the world around them, because real-world data is what they are trained on. There have been several other examples of AI going rogue, such as Microsoft's chatbot Tay, which turned racist. Tay was designed to talk like a millennial and learned from conversations on Twitter and other messaging apps. So was this a programming flaw, or simply a reflection of what it picked up from its surroundings? This episode of The Intersection speaks to the company about its experiment, what the results indicate and the need to…


Link to Full Article: Podcast: Why does Artificial Intelligence often turn out racist?
