MIT’s latest breakthrough? Getting AIs to explain their decisions

Knowing why a machine is making a certain decision becomes especially important in, say, critical situations involving autonomous vehicles. Image: Ford

As humans grapple with ethical questions about what artificial intelligence should do in life-and-death situations, researchers at MIT have devised a way for machines to explain their decisions. The method, outlined in a new paper, could be as important for the adoption of artificial intelligence technologies as the actual breakthroughs in AI enabled by deep learning and neural networks. As MIT points out, while neural networks can be trained to excel at a specified task, such as classifying data, researchers still don't understand why some models work and others don't. Neural nets are effectively black boxes. Not knowing why a neural net can identify animal types in an image might…
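To make the "black box" point concrete, here is a minimal sketch of one generic interpretability technique, input-gradient saliency, written in Python with PyTorch. This is not the method from the MIT paper (the excerpt above does not describe it); the tiny model, the random image, and the class count are all placeholders for illustration only.

```python
# Generic sketch: asking a classifier "which pixels drove this prediction?"
# via input gradients. Assumes PyTorch is installed; the model and input
# are stand-ins, not MIT's approach.
import torch
import torch.nn as nn

# Placeholder classifier: a tiny CNN standing in for a trained animal classifier.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),  # 10 hypothetical animal classes
)
model.eval()

# A single fake RGB image; in practice this would be a real, preprocessed photo.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

# Forward pass: the "black box" emits class scores without any rationale.
scores = model(image)
predicted_class = scores.argmax(dim=1).item()

# Backward pass: the gradient of the winning score with respect to the input
# shows which pixels most influenced that score, a crude per-decision explanation.
scores[0, predicted_class].backward()
saliency = image.grad.abs().max(dim=1).values  # (1, 64, 64) importance map

print("Predicted class:", predicted_class)
print("Most influential pixel index:", saliency.flatten().argmax().item())
```

The point of the sketch is only that an explanation attaches a "why" (here, pixel importance) to a single decision, rather than opening up the network as a whole.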


Link to Full Article: MIT’s latest breakthrough? Getting AIs to explain their decisions
