Transparent machine learning: How to create ‘clear-box’ AI

The next big thing in AI may not be getting a machine to perform a task—it might be requiring the machine to communicate why it took that action. For instance, if a robot decides to take a certain route across a warehouse, or a driverless car turns left instead of right, how do we know why it made that decision?

According to Manuela Veloso, professor of computer science at Carnegie Mellon University, explainable AI is essential to building trust in our systems. Veloso, who works with co-bots (collaborative robots), programs the machines to verbalize their decision process.

“We need to be able to question why programs are doing what they do,” Veloso said. “If we don’t worry about the explanation, we won’t be able to trust the…


Link to Full Article: Transparent machine learning: How to create ‘clear-box’ AI
