Microsoft’s AI Twitter Bot That Went Racist Returns … for a Bit

Microsoft’s artificial intelligence program, Tay, reappeared on Twitter on Wednesday after being deactivated last week for posting offensive messages. However, the program once again went wrong, and Tay’s account was set to private after it began repeating the same message over and over to other Twitter users. According to Microsoft, the account was reactivated by accident during testing. “Tay remains offline while we make adjustments,” a spokesperson for the company told CNBC via email. “As part of testing, she was inadvertently activated on Twitter for a brief period of time.”

Read More from CNBC: Microsoft Created a Twitter Bot. It Quickly Became a Racist Jerk

Twitter users speculated the program was caught in a feedback loop in which it was constantly replying to its own messages. Tay was first launched…

Link to Full Article: Microsoft’s AI Twitter Bot That Went Racist Returns … for a Bit
