Microsoft artificial intelligence chat bot goes rogue

Microsoft is cleaning up after its artificial intelligence chat bot went rogue. The company introduced Tay earlier this week to chat with real humans on Twitter and other messaging platforms. The bot learned by parroting its interactions and then generating its own phrases, and it was supposed to emulate the casual speech of a stereotypical millennial. The internet took advantage and quickly taught Tay to spew racist, sexist and otherwise offensive messages.

Gone offline

The worst tweets are quickly disappearing from Twitter, and Tay itself has now also gone offline “to absorb it all.” Some Twitter users suspect that Microsoft has also manually banned people from interacting with the bot. Others are asking why the company didn’t build filters to prevent Tay from…

