What Makes Artificial Intelligence Racist And Sexist?

Artificial intelligence is infiltrating our daily lives, with applications that curate your phone pics, manage your email, and translate text from any language into another. Google, Facebook, Apple, and Microsoft are all heavily researching how to integrate AI into their major services. Soon you'll likely interact with an AI (or its output) every time you pick up your phone. Should you trust it? Not always.

AI can analyse data more quickly and accurately than humans, but it can also inherit our biases. To learn, it needs massive quantities of data, and the easiest way to find that data is to feed it text from the internet. But the internet contains some extremely biased language. A Stanford study found that an internet-trained AI associated stereotypically white names with positive words such as "love", and black names with negative words such as "failure" and "cancer".

Luminoso Chief Science Officer Rob Speer oversees the open-source data set ConceptNet Numberbatch, which is used as a knowledge base for AI systems. He tested one of Numberbatch's data sources and found obvious problems with its word associations. When fed the analogy question "Man is to woman as shopkeeper is to…" the system…
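The analogy test mentioned above is typically done with word embeddings: each word becomes a vector of numbers, and "A is to B as C is to ?" is answered by computing B − A + C and finding the nearest word vector. Here is a minimal, purely illustrative sketch with tiny hand-made vectors and the classic king/queen example; real systems such as ConceptNet Numberbatch learn vectors with hundreds of dimensions from large text corpora, which is exactly how they pick up the biases in that text.

```python
import math

# Toy "embeddings" (hypothetical values, for illustration only).
embeddings = {
    "man":    [1.0, 0.0, 0.1],
    "woman":  [0.0, 1.0, 0.1],
    "king":   [1.0, 0.0, 0.9],
    "queen":  [0.0, 1.0, 0.9],
    "prince": [1.0, 0.0, 0.7],  # a distractor candidate
}

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def analogy(a, b, c):
    """Answer 'a is to b as c is to ?' via vector arithmetic b - a + c."""
    va, vb, vc = embeddings[a], embeddings[b], embeddings[c]
    target = [y - x + z for x, y, z in zip(va, vb, vc)]
    candidates = [w for w in embeddings if w not in (a, b, c)]
    return max(candidates, key=lambda w: cosine(embeddings[w], target))

print(analogy("man", "woman", "king"))  # -> "queen" with these toy vectors
```

With vectors learned from biased internet text, the same arithmetic can return stereotyped completions, which is what the bias tests described in the article probe for.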


