Facebook speeds up machine learning with Nvidia Tesla power

by Bruno Ferreira, 4:53 PM on December 11, 2015

Neural networks, once considered a computing-resource-intensive approach to machine learning, have become a mainstay in the field over the last decade or so. They still require an enormous amount of computing power to run, though, and plain old CPUs aren't enough for the task, particularly in deep learning applications. That's where GPU compute comes in.

Facebook is the latest player in this field to take advantage of Nvidia's Tesla compute cards, more specifically the Tesla M40. The social media company has recently announced "Big Sur," a system designed specifically for training neural networks. Big Sur is built on an Open Rack V2 platform, and each box can take up to eight compute cards. Facebook claims its design is more versatile…

Link to Full Article: Facebook speeds up machine learning with Nvidia Tesla power
