Communication is key in our age of technology, and communication among your technologies is even more important. Since NVIDIA GPUs powered the Titan supercomputer, a trend has emerged toward heterogeneous node configurations with larger ratios of GPU accelerators per CPU socket, with two or more GPUs per CPU becoming increasingly common as developers continue to expose and leverage the available parallelism in their applications. Although each of the new DOE systems is unique, they share the same fundamental multi-GPU node architecture.

Linking your network

While multi-GPU applications provide a vehicle for scaling single-node performance, they can be constrained by the interconnect performance between the GPUs. Developers must overlap data transfers with computation or carefully orchestrate GPU accesses over the PCIe interconnect to maximize performance. However, as GPUs get faster and GPU-to-CPU ratios climb, a higher-performance node integration interconnect is warranted. Enter NVLink.
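The overlap technique mentioned above is commonly implemented with CUDA streams: a large transfer is split into chunks so that one chunk's copy proceeds while another chunk's kernel runs. The sketch below illustrates the pattern; the `scale` kernel, the chunk count, and the buffer sizes are illustrative assumptions, not part of any particular application.

```cuda
// Sketch: overlapping host-to-device copies with kernel execution
// using CUDA streams. Requires pinned host memory for async copies.
#include <cuda_runtime.h>

__global__ void scale(float *d, int n, float f) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) d[i] *= f;
}

int main() {
    const int N = 1 << 20, CHUNKS = 4, CHUNK = N / CHUNKS;
    float *h, *d;
    cudaMallocHost(&h, N * sizeof(float));  // pinned host memory
    cudaMalloc(&d, N * sizeof(float));

    cudaStream_t streams[CHUNKS];
    for (int c = 0; c < CHUNKS; ++c) cudaStreamCreate(&streams[c]);

    // Work issued to different streams can overlap: while one chunk
    // is being copied, another chunk's kernel can be executing.
    for (int c = 0; c < CHUNKS; ++c) {
        int off = c * CHUNK;
        cudaMemcpyAsync(d + off, h + off, CHUNK * sizeof(float),
                        cudaMemcpyHostToDevice, streams[c]);
        scale<<<(CHUNK + 255) / 256, 256, 0, streams[c]>>>(d + off, CHUNK, 2.0f);
        cudaMemcpyAsync(h + off, d + off, CHUNK * sizeof(float),
                        cudaMemcpyDeviceToHost, streams[c]);
    }
    cudaDeviceSynchronize();

    for (int c = 0; c < CHUNKS; ++c) cudaStreamDestroy(streams[c]);
    cudaFreeHost(h);
    cudaFree(d);
    return 0;
}
```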

NVLink was designed as a solution to the challenges that exascale computing created. With an energy-efficient, high-bandwidth design, this interconnect is the next step in accelerating your GPUs, enabling fast communication between your CPU and GPU hardware as well as connections between the GPUs themselves. NVLink brings data sharing to a new level, with up to 10 times the bandwidth of a traditional PCIe interconnect.
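From an application's point of view, the GPU-to-GPU links described above are exercised through CUDA's peer-to-peer API: once peer access is enabled, direct device-to-device copies are routed over NVLink when the GPUs are linked, and over PCIe otherwise. A minimal sketch, assuming a node with at least two GPUs (devices 0 and 1):

```cuda
// Sketch: direct GPU-to-GPU copy via CUDA peer-to-peer access.
// Traffic travels over NVLink when the two GPUs are linked by it.
#include <cuda_runtime.h>
#include <stdio.h>

int main() {
    int canAccess = 0;
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);  // can GPU 0 access GPU 1?
    if (!canAccess) {
        printf("Peer access between GPU 0 and GPU 1 not available\n");
        return 0;
    }

    cudaSetDevice(0);
    cudaDeviceEnablePeerAccess(1, 0);  // second argument is reserved, must be 0

    const size_t BYTES = 1 << 20;
    float *d0, *d1;
    cudaMalloc(&d0, BYTES);
    cudaSetDevice(1);
    cudaMalloc(&d1, BYTES);

    // Copy directly from GPU 1's memory to GPU 0's memory,
    // without staging the data through host memory.
    cudaMemcpyPeer(d0, 0, d1, 1, BYTES);

    cudaFree(d1);
    cudaSetDevice(0);
    cudaFree(d0);
    return 0;
}
```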

The result is a dynamic technology that speeds up application performance, producing a new class of flexible servers for efficient, ultra-fast computing.

Below you'll find an informative white paper with detailed information about NVLink and its many useful properties.
