AI Shouldn’t Believe Everything It Hears

Artificial intelligence can accurately identify objects in an image or recognize words spoken by a human, but its algorithms don't work the same way as the human brain, and that means they can be spoofed in ways that humans can't. New Scientist reports that researchers from Bar-Ilan University in Israel and Facebook's AI team have shown that it's possible to subtly tweak audio clips so that a human understands them as normal while a voice-recognition AI hears something totally different.

The approach works by adding a quiet layer of noise to a sound clip, noise that contains distinctive patterns a neural network will associate with other words. The team applied its new algorithm, called Houdini, to a series of sound clips, which it then ran through Google Voice to have them transcribed.

An example of an original sound clip read: "Her bearing was graceful and animated she led her son by the hand and before her walked two maids with wax lights and silver candlesticks."

When that original was passed through Google Voice, it was transcribed as: "The bearing was graceful an animated she let her son by the hand and before he walks two maids with wax lights and…"
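The article does not publish Houdini's loss function or implementation details, but the core idea, adding a small bounded noise layer that pushes a model's transcription in an attacker-chosen direction, resembles gradient-sign perturbation attacks. The sketch below illustrates that general mechanism on a raw waveform; the gradient here is a stand-in supplied by the caller, not Houdini's actual objective.

```python
import numpy as np

def perturb_audio(waveform, gradient, epsilon=0.005):
    """Add a quiet, bounded noise layer to a waveform.

    FGSM-style sketch: step in the sign direction of a loss gradient
    taken with respect to the input audio. `gradient` must come from
    the attacked model; Houdini's own task loss is not described in
    the article, so this is a generic illustration, not its method.
    """
    noise = epsilon * np.sign(gradient)          # bounded by epsilon
    adversarial = np.clip(waveform + noise, -1.0, 1.0)
    return adversarial, noise

# Toy usage: a 1-second 440 Hz tone at 16 kHz with a random gradient
# standing in for the model's true loss gradient.
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
clean = 0.5 * np.sin(2 * np.pi * 440 * t)
grad = np.random.default_rng(0).standard_normal(sr)
adv, noise = perturb_audio(clean, grad)
```

Because every sample of the noise layer is bounded by `epsilon`, the perturbation stays quiet relative to the speech, which is what lets a human still hear the original words while the model's transcription shifts.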


