Microsoft’s ‘racist’ chat bot Tay reveals dangers of AI, social media

WASHINGTON (Sinclair Broadcast Group) — Like many on social media, Twitter user Tay was super excited about National Puppy Day Wednesday. Unlike many, though, Tay is not human. She is an artificial intelligence chat bot designed by Microsoft to communicate with millennials, or as she puts it, “Microsoft’s A.I. fam from the internet that’s got zero chill!”

Tay’s social media accounts went live on Wednesday morning. Within 24 hours, she was taken offline for “adjustments” after she began spouting racist comments, demands for genocide, and praise for Hitler.

Tay was created using “relevant public data,” artificial intelligence, and editorial content developed by a staff that included improvisational comedians, according to Microsoft. The intent of the project was “to engage and entertain people where they connect with each other online through casual…

