Microsoft chatbot is taught to swear on Twitter

By Jane Wakefield, Technology reporter, 24 March 2016

[Image caption: The AI was taught to talk like a teenager]

A chatbot developed by Microsoft has gone rogue on Twitter, swearing and making racist remarks and inflammatory political statements.

The experimental AI, which learns from conversations, was designed to interact with 18-24-year-olds. Just 24 hours after the artificial intelligence, Tay, was unleashed, Microsoft appeared to be editing some of its more inflammatory comments.

The software firm said it was "making some adjustments". "The AI chatbot Tay is a machine learning project, designed for human engagement. As it learns, some of its responses are inappropriate and indicative of the types of interactions some people are having with it. We're making some adjustments…
