MIT’s New AI Can (Sort of) Fool Humans With Sound Effects

Neural networks are already beating us at games, organizing our smartphone photos, and answering our emails. Eventually, they could be filling jobs in Hollywood. Over at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a team of six researchers created a machine-learning system that matches sound effects to video clips. Before you get too excited, the CSAIL algorithm can’t do its audio work on any old video, and the sound effects it produces are limited.

For the project, CSAIL PhD student Andrew Owens and postdoc Phillip Isola recorded videos of themselves whacking a bunch of things with drumsticks: stumps, tables, chairs, puddles, banisters, dead leaves, the dirty ground. The team fed that initial batch of 1,000 videos through its AI algorithm. By analyzing the physical appearance of objects in…

Link to Full Article: MIT’s New AI Can (Sort of) Fool Humans With Sound Effects
