AI brains take a step closer to understanding speech just like humans

A thousand spoken words is worth a picture

Machine learning researchers are on a mission to make machines understand speech directly from audio input, the way humans do. At the Neural Information Processing Systems (NIPS) conference this week, researchers from the Massachusetts Institute of Technology (MIT) demonstrated a new way to train computers to recognise speech without first translating it to text. The presentation was based on a paper written by researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL).

The surge of interest in deep learning has accelerated progress in computer speech recognition. Computers can now achieve lower word error rates than professional transcriptionists, but only after intensive training: researchers have to label the audio input with transcriptions containing the right text so that the machines can learn to match sounds to…
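The contrast between the conventional pipeline and the new approach can be sketched in a few lines. The filenames and dictionary keys below are hypothetical, but they illustrate the data requirements: conventional supervised recognisers need a text transcript for every audio clip, while the grounded approach hinted at by the article's title pairs audio with co-occurring images instead, so no text labels are needed.

```python
# Hypothetical examples of the two training setups (filenames are made up).

# Conventional supervised training: every clip is labelled with its transcript.
labeled_dataset = [
    {"audio": "clip_001.wav", "transcript": "turn on the lights"},
    {"audio": "clip_002.wav", "transcript": "what is the weather"},
]

# Grounded training: each clip is paired with an image it describes,
# removing the need for any text transcription.
grounded_dataset = [
    {"audio": "clip_001.wav", "image": "scene_001.jpg"},
    {"audio": "clip_002.wav", "image": "scene_002.jpg"},
]

def needs_transcription(dataset):
    """Return True if every example carries a text transcript label."""
    return all("transcript" in example for example in dataset)

print(needs_transcription(labeled_dataset))   # conventional setup
print(needs_transcription(grounded_dataset))  # grounded setup
```

The point of the sketch is the cost difference: producing the transcript labels in the first dataset requires skilled human transcription, while the image pairings in the second can, in principle, be collected automatically.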


Link to Full Article: AI brains take a step closer to understanding speech just like humans
