FPGAs Speed Machine Learning at SC16 Intel Discovery Zone

In this video from SC16, Intel demonstrates how Altera FPGAs can accelerate machine learning applications with greater power efficiency. The demo was built using OpenCL design tools and then compiled to the FPGA. From an end-user perspective, the stack is tied together using Intel MKL-DNN, with Caffe on top of that. This week, Intel announced the DLIA (Deep Learning Inference Accelerator), which brings the whole solution together in a box. “Today, one of the most popular machine learning methods is using neural networks for object detection and recognition. Neural networks are modelled after the brain’s interconnected neurons and use a variety of layers that extract lower levels of detail for each layer in the network. The FPGA implements these layers very efficiently because the FPGA has the ability to…
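To illustrate the layered structure the quote describes, here is a minimal, hypothetical sketch in plain Python: each layer applies a weighted sum followed by a nonlinearity, so deeper layers combine the lower-level features produced by earlier ones. The weights are random placeholders, not a trained detector, and the layer sizes are arbitrary assumptions for illustration only.

```python
import random

random.seed(0)

def layer(x, w, b):
    """One fully connected layer with a ReLU activation."""
    return [max(sum(wi * xi for wi, xi in zip(row, x)) + bi, 0.0)
            for row, bi in zip(w, b)]

def rand_matrix(rows, cols):
    """Random placeholder weights (a trained network would load real ones)."""
    return [[random.gauss(0, 1) for _ in range(cols)] for _ in range(rows)]

x = [random.gauss(0, 1) for _ in range(64)]   # e.g. a flattened image patch
w1, b1 = rand_matrix(32, 64), [0.0] * 32      # layer 1: low-level features
w2, b2 = rand_matrix(16, 32), [0.0] * 16      # layer 2: combines layer-1 features

h1 = layer(x, w1, b1)   # first level of detail
h2 = layer(h1, w2, b2)  # higher-level features built from h1
print(len(h2))          # 16 output features
```

Each layer is a regular, data-parallel pattern of multiply-accumulate operations, which is why such networks map well to FPGA fabric and to OpenCL kernels compiled for it.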


Link to Full Article: FPGAs Speed Machine Learning at SC16 Intel Discovery Zone
