Google DeepMind Artificial Intelligence Learns to Talk

By Amanda Lee, 9 Sep 2016 (photo: Miles Willis/Getty Images)

Using an “artificial brain,” Google DeepMind researchers have developed a new voice-synthesizing technique that they claim closes the gap to real human speech by at least 50% compared with current text-to-speech (TTS) systems, in both US English and Mandarin Chinese. The system, known as WaveNet, generates speech by building up the raw audio waveform itself, one individual sample at a time. Because it is designed to mimic human brain function, WaveNet can learn from extremely detailed audio — at least 16,000 samples per second. The model statistically predicts which sample value comes next and pieces the samples together to produce raw audio. While most of the existing TTS systems also use…
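The sample-by-sample generation described above is autoregressive: each new audio sample is drawn from a probability distribution conditioned on everything generated so far. The sketch below illustrates only that loop structure, not DeepMind's actual model — the `predict_next` function here is a hypothetical uniform stand-in for WaveNet's deep convolutional network, and the 256-level quantization and function names are assumptions for illustration.

```python
import random

def predict_next(history, n_levels=256):
    """Toy stand-in for WaveNet's predictive network.

    A real WaveNet conditions a deep network on the sample history to
    produce a distribution over the next quantized amplitude; here we
    just return a uniform distribution for illustration.
    """
    return [1.0 / n_levels] * n_levels

def generate_audio(n_samples, seed=0):
    """Autoregressive generation loop: draw each sample from a
    distribution conditioned on all previously generated samples."""
    rng = random.Random(seed)
    audio = []
    for _ in range(n_samples):
        probs = predict_next(audio)
        sample = rng.choices(range(len(probs)), weights=probs, k=1)[0]
        audio.append(sample)
    return audio

# At 16,000 samples per second, one second of audio means 16,000
# iterations of this loop — which is why generation is expensive.
waveform = generate_audio(16000)
```

The key point the article makes is the granularity: because the model operates on at least 16,000 samples per second, even short utterances require tens of thousands of sequential predictions.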

Link to Full Article: Google DeepMind Artificial Intelligence Learns to Talk
