Fujitsu Makes GPUs More Efficient for Machine Learning, AI

New technology developed by Fujitsu Labs is designed to help neural networks scale and learn more quickly by streamlining GPU internal memory. Engineers at Fujitsu Laboratories are focusing on the memory within graphics cards to speed up the task of machine learning on neural networks.

Fujitsu Labs this week announced new technology that officials said streamlines the internal memory of GPUs to meet the growing demand for greater scale in neural networks, which are among the foundations of the drive toward artificial intelligence (AI). Tests have shown that the technology essentially doubles the capability of neural networks while reducing the amount of internal GPU memory used by more than 40 percent.

GPUs are finding their way into a growing number of areas—such as high-performance computing (HPC)—where they can be used as accelerators for…
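
The article does not disclose how Fujitsu's technology streamlines GPU memory, but one well-known way to cut the memory a neural network needs during training is to store only a subset of intermediate activations and recompute the rest on demand, trading extra computation for a smaller memory footprint. The sketch below is a minimal, hypothetical illustration of that general idea for a chain of scalar layers — it is not Fujitsu's method, and all function names are invented for this example.

```python
# Illustrative sketch: gradient computation for a chain of scalar layers,
# storing only every k-th activation ("checkpoint") and recomputing the
# rest during the backward pass. This is a generic memory-saving idea,
# not a description of Fujitsu's actual technology.

def grad_naive(layers, x):
    """Backprop that stores every intermediate activation."""
    acts = [x]
    for f, _ in layers:
        acts.append(f(acts[-1]))
    g = 1.0
    # Chain rule: multiply each layer's derivative at its input.
    for (f, df), a_in in zip(reversed(layers), reversed(acts[:-1])):
        g *= df(a_in)
    return g

def grad_checkpointed(layers, x, k=2):
    """Backprop that stores only every k-th activation and recomputes
    the activations inside each segment when they are needed."""
    n = len(layers)
    checkpoints = {0: x}          # forward pass keeps only segment boundaries
    a = x
    for i, (f, _) in enumerate(layers, 1):
        a = f(a)
        if i % k == 0 and i < n:
            checkpoints[i] = a
    g = 1.0
    for s in sorted(checkpoints, reverse=True):
        e = min(s + k, n)
        # Recompute this segment's activations from its checkpoint.
        seg = [checkpoints[s]]
        for f, _ in layers[s:e]:
            seg.append(f(seg[-1]))
        # Accumulate the segment's derivative factors.
        for (f, df), a_in in zip(reversed(layers[s:e]), reversed(seg[:-1])):
            g *= df(a_in)
    return g

# A small chain of layers as (function, derivative) pairs.
layers = [
    (lambda x: x * x,  lambda x: 2 * x),
    (lambda x: 3 * x,  lambda x: 3.0),
    (lambda x: x + 1,  lambda x: 1.0),
    (lambda x: x * x,  lambda x: 2 * x),
]
```

With these layers, `grad_naive(layers, 2.0)` and `grad_checkpointed(layers, 2.0)` produce the same gradient, but the checkpointed version holds only about `n / k` activations at once — the same kind of memory-for-compute trade-off that lets larger networks fit in a fixed GPU memory budget.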

Link to Full Article: Fujitsu Makes GPUs More Efficient for Machine Learning, AI
