Microsoft Apologizes for Tay, the neo-Nazi Twitter Chatbot

[Image: TayTweets’ Twitter photo.]

Microsoft has apologized for its chatbot “Tay” after it turned from a friendly artificial intelligence algorithm into an offensive Nazi sympathizer in less than 24 hours. “We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay,” Microsoft Research Corporate Vice President Peter Lee wrote on the company’s blog after the chatbot was taken offline. Hours after it was launched as an experiment in conversational understanding, Tay, a deep learning algorithm, began tweeting offensive and racist comments, including several admiring of Adolf Hitler. Among the tweets were “Hitler did nothing wrong” and “Hitler was right I hate the jews.” Asked if the Holocaust happened, the chatbot replied: “It was made…

Link to Full Article: Microsoft Apologizes for Tay, the neo-Nazi Twitter Chatbot
