Nvidia JetPack 2.3 Doubles Deep Learning And Compilation Performance

The Jetson TX1 suite of development tools and libraries has reached version 2.3, which doubles performance on deep learning tasks. Nvidia's Jetson TX1 is the company's highest-performance embedded chip for deep learning. The chip can run intelligent algorithms that address problems in public safety, smart cities, manufacturing, disaster relief, agriculture, transportation, and infrastructure inspection.

TensorRT

The new JetPack 2.3 includes TensorRT, previously known as the GPU Inference Engine. The TensorRT deep learning inference engine doubles the performance of applications such as image classification, segmentation, and object detection compared with Nvidia's previous cuDNN-based implementation. Nvidia said developers can now deploy real-time neural networks on the Jetson TX1.

cuDNN 5.1

The new software suite includes cuDNN 5.1, a CUDA-accelerated library for deep learning that offers developers highly optimized…
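An inference engine like TensorRT accelerates only the forward pass of an already-trained network, not training itself. As a conceptual sketch (plain NumPy with hypothetical stand-in weights, not the TensorRT API), classifying one input amounts to a forward pass followed by a softmax:

```python
import numpy as np

# Inference = a single forward pass through a network whose weights
# are already trained. The weights below are hypothetical stand-ins.

def softmax(z):
    """Convert raw scores into a probability distribution."""
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x, W, b):
    """One dense layer followed by softmax: returns class probabilities."""
    return softmax(W @ x + b)

# Toy "trained" parameters: 3 classes, 4 input features.
W = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])
b = np.zeros(3)

x = np.array([0.1, 0.2, 2.0, 1.5])  # feature vector for one input
probs = classify(x, W, b)
print(int(probs.argmax()))          # index of the most likely class
```

On embedded hardware such as the Jetson TX1, an inference engine would fuse and optimize these same operations for the GPU rather than running them layer by layer on the CPU.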
