2019 IBM Beacon Award For Novatech Power Systems
Novatech were selected by a panel of expert judges consisting of IBM executives, industry analysts, and industry experts at the IBM PartnerWorld Think conference in San Francisco.
If your workflow needs more compute performance, multiple supercomputers can be linked together in a cluster. Deep learning and AI analysis place heavy demands on hardware and require vast amounts of compute and network capacity. A cluster is your bridge to the new avenues you've been searching for. Prepare to take your infrastructure to the next level.
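To make that concrete, here is a minimal sketch of what multi-node training looks like from the software side. It assumes PyTorch with the NCCL backend and a launcher such as torchrun providing the usual rank environment variables; the model and data are placeholders, not part of any specific Novatech configuration.

```python
# Minimal multi-node data-parallel training sketch (assumes PyTorch + NCCL,
# launched with torchrun so RANK/LOCAL_RANK/WORLD_SIZE are already set).
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # One process per GPU; NCCL carries the GPU-to-GPU traffic over the interconnect.
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    # Placeholder model; swap in your real network and data loader.
    model = torch.nn.Linear(1024, 1024).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    for step in range(10):
        x = torch.randn(32, 1024, device=f"cuda:{local_rank}")
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()      # gradients are all-reduced across every node
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

Launched with a tool such as torchrun on every node (all pointing at the same rendezvous endpoint), the same script scales from a single machine to a full cluster.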
A GPU cluster isn't as simple as boosting an application with multiple powerful GPUs. There are three key components that make up a GPU cluster: the host nodes, the GPUs and the network interconnect. With the GPUs handling the vast majority of the calculations, the performance of the host nodes and the network interconnect needs to match the GPUs to create a balanced system. Matching the host memory to the amount of memory on the GPUs enables full utilisation and simplifies the development of your applications.
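As a quick illustration of that balance, the sketch below compares a node's host RAM against the total memory of its GPUs. It assumes PyTorch with CUDA and the psutil package are available; the 1:1 comparison simply mirrors the rule of thumb above.

```python
# Rough host-memory vs GPU-memory balance check for a single node
# (assumes PyTorch with CUDA and the psutil package are installed).
import psutil
import torch

def memory_balance_report():
    host_bytes = psutil.virtual_memory().total
    gpu_bytes = sum(
        torch.cuda.get_device_properties(i).total_memory
        for i in range(torch.cuda.device_count())
    )
    print(f"Host RAM : {host_bytes / 2**30:6.1f} GiB")
    print(f"GPU RAM  : {gpu_bytes / 2**30:6.1f} GiB across "
          f"{torch.cuda.device_count()} GPU(s)")
    if host_bytes < gpu_bytes:
        print("Warning: host memory is smaller than total GPU memory; "
              "staging data on the host may become a bottleneck.")
    else:
        print("Host memory matches or exceeds total GPU memory.")

if __name__ == "__main__":
    memory_balance_report()
```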
Example GPU POD configuration:
- Assume growth for up to 12 nodes
- 2 racks, 2 InfiniBand (IB) switches (36 ports)
- 19.2 kW per rack, which can be split across the rack if required
- Full bi-section bandwidth for each group of 6 nodes
- 2:1 oversubscription between the groups of 6
- Defines a GPU "POD"
- Can be replicated for greater scale, e.g. a large cluster configuration
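To put rough numbers on the POD above, here is a back-of-the-envelope sketch of the power budget and the bandwidth implied by the 2:1 oversubscription. The per-node link speed is an assumed placeholder (100 Gb/s), not a figure from the configuration itself.

```python
# Back-of-the-envelope power and bandwidth budget for the GPU POD above.
NODES = 12
RACKS = 2
KW_PER_RACK = 19.2
LINK_GBPS = 100            # assumed per-node InfiniBand link speed (placeholder)
GROUP_SIZE = 6             # nodes with full bi-section bandwidth
OVERSUBSCRIPTION = 2       # 2:1 between the two groups of 6

total_power_kw = RACKS * KW_PER_RACK
power_per_node_kw = total_power_kw / NODES

intra_group_gbps = GROUP_SIZE * LINK_GBPS / 2          # full bi-section within a group
inter_group_gbps = intra_group_gbps / OVERSUBSCRIPTION # halved between groups

print(f"Total power budget : {total_power_kw:.1f} kW "
      f"({power_per_node_kw:.1f} kW per node)")
print(f"Bi-section bandwidth within a group of {GROUP_SIZE}: {intra_group_gbps:.0f} Gb/s")
print(f"Bandwidth between the two groups (2:1 oversubscribed): {inter_group_gbps:.0f} Gb/s")
```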
Example large cluster configuration:
- 6 racks, 6 nodes per rack
- Larger IB director switch (216 ports) with capacity for more PODs via unused ports
- Implements 4 GPU PODs
- Distributed across 24 racks
- Full bi-section bandwidth within a POD, 2:1 between PODs
- Training jobs ideally scheduled within a POD to minimise inter-POD traffic (see the scheduling sketch after this list)
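As a simple illustration of POD-aware placement, the sketch below chooses nodes for a training job from a single POD whenever one has enough free capacity, only spanning PODs as a fallback. The node names and POD map are invented for the example, not a real inventory.

```python
# Toy POD-aware node selection: prefer placing a whole job inside one POD
# so its all-reduce traffic stays on the full bi-section fabric.
from typing import Dict, List, Optional

# Illustrative POD map (hypothetical node names).
PODS: Dict[str, List[str]] = {
    "pod1": [f"gpu-node-{i:02d}" for i in range(1, 13)],
    "pod2": [f"gpu-node-{i:02d}" for i in range(13, 25)],
}

def pick_nodes(free: Dict[str, List[str]], needed: int) -> Optional[List[str]]:
    """Return `needed` free nodes, preferring a single POD."""
    # First choice: any single POD with enough free nodes (no inter-POD traffic).
    for pod, nodes in free.items():
        if len(nodes) >= needed:
            return nodes[:needed]
    # Fallback: span PODs, accepting the 2:1 oversubscribed links between them.
    pooled = [n for nodes in free.values() for n in nodes]
    return pooled[:needed] if len(pooled) >= needed else None

if __name__ == "__main__":
    free_nodes = {"pod1": PODS["pod1"][:4], "pod2": PODS["pod2"]}
    print(pick_nodes(free_nodes, 8))   # fits entirely within pod2
```

In practice this policy would live inside the cluster scheduler (for example through topology-aware settings in a workload manager such as Slurm), but the idea is the same: keep a job's traffic inside one POD where possible.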
Learn what a multi-node GPU cluster could do for you. Enquire now.