Wikimedia tries AI to catch bad edits

The Wikimedia Foundation has a problem with new editors: the tools it created to help with quality control were hostile to newcomers and rejected too much of their work. A new artificial intelligence (AI) tool created for Wikipedia quality assessment is therefore designed to help participants judge whether their edits are likely to damage the article they’re working on – and rather than treating the AI as a black box, the foundation has open-sourced the software under the MIT license. As Aaron Halfaker and Dario Taraborelli explain in their write-up of the Objective Revision Evaluation Service (ORES), the idea is to train models against the quality assessments of edits and articles currently made by humans. They want to reverse the ill effects of tools like Huggle, STiki and ClueBot, which were good at helping maintain quality…
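The core idea – training a classifier on human-labelled revisions so it can score new edits as likely damaging or not – can be sketched in miniature. The feature names, toy data, and simple logistic-regression model below are illustrative assumptions, not ORES's actual features or algorithms:

```python
# Illustrative sketch only: ORES's real feature set and models are far more
# sophisticated. This toy logistic regression just shows the idea of learning
# a "damaging edit" score from human-labelled revisions.
import math

# Hypothetical per-edit features: (chars_added / 1000, is_anonymous, badword_count)
# paired with a human label: 1 = damaging, 0 = good-faith.
training_data = [
    ((0.35, 0, 0), 0),
    ((0.01, 1, 3), 1),
    ((0.20, 0, 0), 0),
    ((0.02, 1, 2), 1),
    ((0.50, 0, 1), 0),
    ((0.00, 1, 4), 1),
]

def predict(weights, bias, features):
    """Return the model's probability that an edit is damaging."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

def train(data, epochs=500, lr=0.1):
    """Fit weights by stochastic gradient descent on log-loss."""
    weights = [0.0] * len(data[0][0])
    bias = 0.0
    for _ in range(epochs):
        for features, label in data:
            error = predict(weights, bias, features) - label
            bias -= lr * error
            weights = [w - lr * error * x for w, x in zip(weights, features)]
    return weights, bias

weights, bias = train(training_data)
suspicious = predict(weights, bias, (0.01, 1, 3))   # anonymous, several bad words
routine = predict(weights, bias, (0.40, 0, 0))      # logged-in, clean addition
```

A tool built on such scores can then surface only the high-probability-damaging edits to patrollers, instead of auto-reverting newcomers' work outright.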

Link to Full Article: Wikimedia tries AI to catch bad edits
