|GPU's Installed||4x NVIDIA Tesla V100|
|Maximum Number of GPUs||4x|
|Peak Half Precision FP16 Performance Per GPU||112 TFLOPS|
|Peak Single Precision FP32 Performance Per GPU||14 TFLOPS|
|Peak Double Precision FP64 Performance Per GPU||7 TFLOPS|
|GPU Memory Per GPU||16 GB HBM2|
|Memory Interface Per GPU||4096-bit|
|Memory Bandwidth Per GPU||900 GB/s|
|System Interface||PCI Express 3.0 x16|
|CUDA Cores Per GPU||5120|
|Maximum Power Consumption Per GPU||250 W|
|Description||Intel® Xeon® Gold 5118 Processor|
|# of Cores||12|
|# of Threads||24|
|Processor Base Frequency||2.30 GHz|
|Max Turbo Frequency||3.20 GHz|
|Description||192GB (6x32GB) 2666MHz DDR4 ECC Registered|
|Drive 1||1x Samsung PM863A 480GB 2.5" SSD 6Gb/s|
|Drive 2||2x Seagate Enterprise Performance 2TB HDD 2.5" 7200RPM|
|Dimensions||438(W) x 176(H) x 770(D)mm|
|2.5" Hotswap Drive Bays||x10 (x3 occupied in base configuration)|
|5.25" Drive Bays||x0|
|System Cooling Configuration||(5+1) hot-swap 12cm fans|
|Description||3,200 Watts (200-240Vac input) PFC / 80 PLUS Platinum|
|CPU||2 x Socket P (LGA 3647)
Intel® Xeon® Scalable Processor Family (165W)
Intel® Xeon® Scalable Processor Family with Omni-Path Fabric (supported on CPU1) (165W)
*Refer to the support page for more information
UPI (10.4 GT/s)|
|Memory||24 DIMM slots (6 occupied in base configuration)
Up to 3TB RDIMM|
|Expansion Slots||(8) PCI-E Gen3 x16 slots
(2) PCI-E Gen3 x8 Tyan Mezzanine slots|
|Storage||(2) Mini-SAS HD (8 ports)
RAID 0/1/10/5 (Intel RSTe)|
|LAN||(2) 10GbE ports / (1) PHY|
|Front I/O||(2) USB 3.0 ports
(1) DB-9 COM port
(1) D-Sub 15-pin port|
|IPMI||(2) 10GbE ports, (1) GbE dedicated for IPMI|
|Description||Ubuntu 16.04.3 LTS|
NVIDIA® Tesla® V100 is the world's most advanced data center GPU, built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.
Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads.
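As a rough cross-check, the per-GPU peak-throughput figures in the table above follow directly from the core counts, assuming the ~1.38 GHz boost clock of the PCIe V100 (the clock itself is not listed in the table):

```python
# Back-of-envelope check of the peak-throughput rows in the spec table.
# ASSUMPTION: the ~1.38 GHz boost clock of the PCIe V100, which the table
# does not list; the core counts come from the table itself.
CUDA_CORES = 5120        # FP32 CUDA cores per GPU
TENSOR_CORES = 640       # Tensor Cores per Volta GPU
BOOST_CLOCK_HZ = 1.38e9  # assumed boost clock

# Each CUDA core retires one FMA (2 FLOPs) per clock cycle.
fp32_tflops = CUDA_CORES * 2 * BOOST_CLOCK_HZ / 1e12
# Volta FP64 runs at half the FP32 rate.
fp64_tflops = fp32_tflops / 2
# Each Tensor Core performs a 4x4x4 FP16 matrix FMA per clock:
# 64 multiply-accumulates = 128 FLOPs.
fp16_tflops = TENSOR_CORES * 128 * BOOST_CLOCK_HZ / 1e12

print(round(fp32_tflops, 1), round(fp64_tflops, 1), round(fp16_tflops))
# → 14.1 7.1 113  (matching the ~14 / 7 / 112 TFLOPS quoted in the table)
```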
Intel® Xeon® Scalable processors optimise interconnectivity with a focus on speed without compromising data security. Advanced features have been woven into the silicon, building in synergy among compute, network, and storage.
All Novatech Deep Learning systems can come with the Ubuntu 16.04 LTS server operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, PyTorch, TensorFlow, Theano, and Torch.
If you require a framework not listed, simply speak to our team and make them aware of your needs.
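Before layering any of these frameworks on top, it can be useful to confirm the driver sees all four GPUs. A minimal sketch, assuming only that `nvidia-smi` (which ships with the NVIDIA driver) is on the PATH:

```python
# Minimal sketch: confirm the NVIDIA driver can see the installed GPUs
# before installing deep learning frameworks. Assumes only that
# `nvidia-smi` is on PATH; degrades to an empty list otherwise.
import shutil
import subprocess

def gpu_names():
    """Return the driver-reported GPU names, or [] when no driver/GPU is found."""
    if shutil.which("nvidia-smi") is None:
        return []
    result = subprocess.run(
        ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    if result.returncode != 0:
        return []
    return [line.strip() for line in result.stdout.splitlines() if line.strip()]

print(gpu_names())  # on this system: four "Tesla V100" entries
```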
We are ISO 9001:2008 certified and can manage the design, build, and configuration of compute, network, and storage solutions specific to your needs and applications.
We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulation needs are met.
All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.
Contact our dedicated Deep Learning team today for a tailored quotation.