Racist algorithms: how Big Data makes bias seem objective

The Ford Foundation's Michael Brennan discusses the many studies showing how algorithms can magnify bias — for example, the disproportionate rate at which police background-check ads are shown against searches for Black-sounding names. What's worse is the way machine learning magnifies these problems: if an employer has only ever hired young applicants, a machine learning algorithm trained on that history will learn to screen out all older applicants without anyone having to tell it to do so. Worst of all, accomplishing this discrimination through algorithms lends a veneer of objective respectability to racism, sexism, and other forms of discrimination. I recently attended a meeting about some preliminary research on "predictive policing," which uses these machine learning algorithms to allocate police resources to likely crime hotspots. The researchers at the Human Rights…
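The hiring example above can be made concrete with a minimal sketch. Everything here is invented for illustration — the data, the "historical policy," and the toy learner (a one-level decision stump) are stand-ins, not any real hiring system. The point is only that a learner given biased past decisions as its training labels will rediscover the biased rule on its own:

```python
import random

random.seed(0)

# Hypothetical training data: past hiring decisions in which only
# young applicants (age < 35) were ever hired, regardless of skill.
applicants = [{"age": random.randint(22, 60),
               "skill": random.random()} for _ in range(1000)]
for a in applicants:
    a["hired"] = a["age"] < 35  # the biased historical policy

def best_stump(data, features):
    """Naive learner: find the single feature threshold that best
    reproduces the past hiring decisions (a one-level decision tree)."""
    best = None
    for f in features:
        for t in sorted({a[f] for a in data}):
            acc = sum((a[f] < t) == a["hired"] for a in data) / len(data)
            if best is None or acc > best[2]:
                best = (f, t, acc)
    return best

feature, threshold, accuracy = best_stump(applicants, ["age", "skill"])
print(feature, threshold, accuracy)  # -> age 35 1.0
# Nobody told the learner to discriminate by age; it simply found that
# an age cutoff perfectly explains the biased training labels, while
# skill explains almost nothing.
```

The same dynamic holds for real learners and real features: when the training labels encode a discriminatory practice, an accurate model is one that reproduces the discrimination.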

Link to Full Article: Racist algorithms: how Big Data makes bias seem objective
