Hi Alvaro. SLI is not used within CUDA programming; it is a technology for combining GPUs for graphics rendering. Am I right in assuming you are hoping to speed up deep neural network training using multiple GPUs? If so, a number of deep learning frameworks support multi-GPU training of a single model. This is possible because each GPU is individually addressable within a CUDA application, so the workload can be distributed across them. For example, the version of Caffe that powers the NVIDIA DIGITS deep learning interface supports training a single model on multiple GPUs within a single compute node. Disclosure: I work for NVIDIA.
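
To illustrate the "individually addressable" point, here is a minimal sketch of how a CUDA program can enumerate the GPUs in a node and dispatch work to each one with `cudaGetDeviceCount` and `cudaSetDevice` (these are standard CUDA Runtime API calls; the kernel and sizes are just placeholders, not DIGITS/Caffe code):

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Placeholder kernel: each GPU fills its own buffer with its device index.
__global__ void fill(float *out, float value, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = value;
}

int main() {
    int deviceCount = 0;
    cudaGetDeviceCount(&deviceCount);
    printf("Found %d CUDA device(s)\n", deviceCount);

    const int n = 1 << 20;
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);  // make this GPU current for subsequent runtime calls
        float *d_buf = nullptr;
        cudaMalloc(&d_buf, n * sizeof(float));
        // Kernel launches are asynchronous, so each GPU starts working
        // while the host loop moves on to the next device.
        fill<<<(n + 255) / 256, 256>>>(d_buf, (float)dev, n);
    }
    // Wait for every device to finish, then release its buffer.
    for (int dev = 0; dev < deviceCount; ++dev) {
        cudaSetDevice(dev);
        cudaDeviceSynchronize();
    }
    return 0;
}
```

A framework doing data-parallel training works on the same principle: each GPU holds a replica of the model, processes its own slice of the batch, and the gradients are combined across devices. None of this requires SLI to be enabled.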

Link to Full Article: NVIDIA: CUDA works in SLI
