|GPUs Installed||2x NVIDIA Tesla V100 SXM2 NVLINK2|
|Maximum Number of GPUs||4x|
|CUDA Cores Per GPU||5120|
|Peak Half Precision FP16 Performance Per GPU||125 TFLOPS|
|Peak Single Precision FP32 Performance Per GPU||15.7 TFLOPS|
|Peak Double Precision FP64 Performance Per GPU||7.8 TFLOPS|
|GPU Memory Per GPU||16 GB HBM2|
|Memory Interface Per GPU||4096-bit|
|Memory Bandwidth Per GPU||900 GB/s|
|Maximum Power Consumption Per GPU||300 W|
|Description||Intel® Xeon® Gold 5118 Processor|
|# of Cores||12|
|# of Threads||24|
|Processor Base Frequency||2.30 GHz|
|Max Turbo Frequency||3.20 GHz|
|Description||384GB (6x64GB) 2666MHz DDR4 ECC Registered|
|Drive 1||1x Samsung PM863A 480GB 2.5" SSD 6Gb/s|
|Drive 2||2x Seagate Enterprise Performance 2TB HDD 2.5" 7200RPM|
|Dimensions||437(W) x 43(H) x 997(D)mm|
|2.5" Hotswap Drive Bays||2x Hotswap (2x occupied in base configuration)
2x Fixed internal (occupied in base configuration)|
|5.25" Drive Bays||0|
|System Cooling Configuration||7x heavy-duty 4cm counter-rotating fans with air shroud and optimal fan-speed control|
|Description||2000W Redundant Power Supplies with PMBus, 80 PLUS Titanium|
|CPU||2x Socket P (LGA 3647)
Intel® Xeon® Scalable Processor Family (165W)
Intel® Xeon® Scalable Processor Family with OMNI-PATH FABRIC (supported on CPU1) (165W)
*Refer to support page for more information
UPI (10.4 GT/s)|
|Memory||12 DIMM slots (6 occupied in base configuration)
Up to 1.5TB RDIMM|
|Expansion Slots||4 PCI-E 3.0 x16 slots|
|Networking||(2) 10GbE RJ-45 ports, (1) GbE dedicated for IPMI|
|I/O Ports||(2) USB 3.0 ports (at rear)
(1) Fast UART 16550 header (internal)
(1) D-Sub 15-pin port (at rear)|
|Description||Ubuntu 16.04.3 LTS|
From recognising speech to training virtual personal assistants and teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these kinds of problems requires training deep learning models of exponentially growing complexity in a practical amount of time.
With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance.
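The 125 TFLOPS figure in the spec table above follows directly from the Tensor Core count. As a rough sanity check (the ~1530 MHz boost clock and the 128 FLOPs per core per clock are assumptions not stated in the table):

```python
# Back-of-envelope check of the V100's peak FP16 Tensor Core throughput.
# Assumptions: ~1530 MHz boost clock; each Tensor Core performs a 4x4x4
# matrix FMA per clock = 64 multiply-adds = 128 FLOPs.
TENSOR_CORES = 640
FLOPS_PER_CORE_PER_CLOCK = 128
BOOST_CLOCK_HZ = 1530e6

peak_tflops = TENSOR_CORES * FLOPS_PER_CORE_PER_CLOCK * BOOST_CLOCK_HZ / 1e12
print(f"Peak FP16 Tensor throughput: {peak_tflops:.0f} TFLOPS")
```

This lines up with the 125 TFLOPS "Peak Half Precision FP16 Performance Per GPU" row in the specification table.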
The GPUs are plugged into the motherboard via an SXM2 connection, which offers between 5 and 10 times the speed of traditional PCI-E 3.0, decreasing the latency and increasing the bandwidth from GPU to GPU as well as from GPU to CPU.
Unleash ultra-fast communication between the GPU and CPU with NVIDIA® NVLink, a high-bandwidth, energy-efficient interconnect that allows data sharing at rates 5 to 10 times faster than the traditional PCIe Gen3 interconnect, resulting in dramatic speed-ups in application performance that create a new breed of high-density, flexible servers for accelerated computing.
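The claimed 5 to 10 times speed-up can be sanity-checked with idealised numbers. The bandwidth figures below are assumptions rather than measurements: ~15.75 GB/s theoretical peak for PCIe 3.0 x16, and ~150 GB/s per direction across a V100's six 25 GB/s NVLink 2.0 links:

```python
# Rough, idealised transfer-time comparison: moving one full V100 memory's
# worth of data (16 GB) over PCIe 3.0 x16 versus aggregated NVLink 2.0.
# Bandwidth values are theoretical peaks (assumptions, not measurements).
PCIE3_X16_GBPS = 15.75   # PCIe 3.0 x16 theoretical peak, one direction
NVLINK2_GBPS = 150.0     # six 25 GB/s NVLink 2.0 links, one direction

def transfer_seconds(gigabytes: float, bandwidth_gbps: float) -> float:
    """Idealised time to move `gigabytes` at `bandwidth_gbps` GB/s."""
    return gigabytes / bandwidth_gbps

payload_gb = 16.0
pcie_s = transfer_seconds(payload_gb, PCIE3_X16_GBPS)
nvlink_s = transfer_seconds(payload_gb, NVLINK2_GBPS)
print(f"PCIe 3.0 x16: {pcie_s:.2f} s, NVLink 2.0: {nvlink_s:.2f} s "
      f"({pcie_s / nvlink_s:.1f}x speed-up)")
```

Real-world throughput is lower on both interconnects, but the ratio stays in the 5-10x range quoted above.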
Intel® Xeon® Scalable processors optimise interconnectivity with a focus on speed without compromising data security. Advanced features have been woven into the silicon, with synergy among compute, network, and storage built in.
All Novatech Deep Learning systems can come with Ubuntu 16.04 server LTS operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, Pytorch, Tensorflow, Theano, and Torch.
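A quick way to verify which of the listed frameworks are present on a delivered system is to probe for their modules. The package names below are assumptions about the typical import names for each framework:

```python
# Check which deep learning frameworks are importable on this system.
# Module names are assumed typical import names, not guaranteed by Novatech.
import importlib.util

FRAMEWORKS = ["caffe", "caffe2", "cntk", "torch", "tensorflow", "theano"]

def installed_frameworks(names):
    """Map each module name to whether an importable package was found."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

print(installed_frameworks(FRAMEWORKS))
```

`find_spec` locates a top-level package without importing it, so the check is cheap and side-effect free.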
If you require a Framework not listed, simply speak to our team and make them aware of your need.
We are ISO 9001:2008 certified and can manage the design, build, and configuration of compute, network, and storage solutions specific to your needs and applications.
We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulation needs are met.
All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.
Contact our dedicated Deep Learning team today for a tailored quotation.