Novatech DL-TR2

This system is powered by Intel's latest Xeon Scalable platform, offering 96 PCI-E lanes. Utilising Intel's Ultra Path Interconnect (UPI), it offers the highest level of performance available whilst still using a PCI-E bus for GPU connectivity. Our default specification with 4x V100 GPUs can deliver close to 450 teraflops of FP16 performance (4 x 112 TFLOPS), giving outstanding training performance, and when configured with 8 GPUs the system can also be used as a deployment unit. Server-grade ECC memory is used for increased reliability, as are redundant PSUs. The system incorporates SSD storage to increase system speed, as well as offering 4TB (default specification) of mechanical drive storage.

Proof of Concept available
Sale or Return available
Available within 10 working days

Specification

GPU

NVIDIA Tesla V100, 14 TFLOPS FP32 per GPU (4x installed in default configuration)

CPU

2x Intel Xeon Scalable Gold 5118, 12 cores, 2.3GHz. Each CPU provides 48 PCI-E lanes (96 in total).

STORAGE

OS: Samsung PM863A 480GB SSD
Data: 2x 2TB HDD
Fully configurable

Memory

192GB 2666MHz six-channel ECC Registered RAM (6x32GB)

GPU Specification

GPUs Installed 4x NVIDIA Tesla V100
Maximum Number of GPUs 8x
CUDA Cores Per GPU 5120
Peak Half Precision FP16 Performance Per GPU 112 TFLOPS
Peak Single Precision FP32 Performance Per GPU 14 TFLOPS
Peak Double Precision FP64 Performance Per GPU 7 TFLOPS
GPU Memory Per GPU 16 GB HBM2
Memory Interface Per GPU 4096-bit
Memory Bandwidth Per GPU 900 GB/s
System Interface PCI Express 3.0 x16
Maximum Power Consumption Per GPU 250 W
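
The per-GPU figures above also account for the headline number in the overview: 4 x 112 TFLOPS is roughly 450 teraflops of aggregate peak FP16 performance. As a minimal sketch, assuming a CUDA-enabled PyTorch build is installed (PyTorch is one of the platforms listed under "Ready to go" below), the installed GPUs and that aggregate figure can be checked as follows:

# Minimal sketch: enumerate the installed Tesla V100s and report the aggregate
# peak FP16 figure. Assumes a CUDA-enabled PyTorch build is installed.
import torch

PEAK_FP16_TFLOPS_PER_GPU = 112  # per-GPU figure from the table above

n_gpus = torch.cuda.device_count()
for i in range(n_gpus):
    props = torch.cuda.get_device_properties(i)
    print("GPU {}: {}, {:.1f} GB".format(i, props.name, props.total_memory / 1024 ** 3))

print("Aggregate peak FP16: ~{} TFLOPS".format(n_gpus * PEAK_FP16_TFLOPS_PER_GPU))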

CPU (Two installed in default configuration)

Description Intel® Xeon® Gold 5118 Processor
# of Cores 12
# of Threads 24
Processor Base Frequency 2.30 GHz
Max Turbo Frequency 3.20 GHz
Cache 16.5 MB
TDP 105 W


Memory

Description 192GB (6x32GB) 2666MHz DDR4 ECC Registered
Maximum Capacity 3TB


Storage

Drive 1 1x Samsung PM863A 480GB 2.5" SSD 6Gb/s
Drive 2 2x Seagate Enterprise Performance 2TB HDD 2.5" 7200RPM

Chassis

Description 4U Rackmountable
Colour Black
Dimensions 438(W) x 176(H) x 770(D)mm
2.5" Hotswap Drive Bays x10 (x3 Occuipid in base configuration)
5.25" Drive Bays x0
System Cooling Configuration (5+1) hot-swap 12cm fans

Power Supply

Description 3,200 Watts (200-240Vac input) PFC / 80 Plus Platinum
Redundancy 2+1

Motherboard


CPU 2 x Socket P (LGA 3647)
Intel® Xeon® Scalable Processors Family (165W)
Intel® Xeon® Scalable Processors Family with OMNI-PATH FABRIC (supported on CPU1) (165W)
*Refer to support page for more information
UPI (10.4 GT/s)
Chipset Intel® C621
Memory 24 DIMM slots (6 Occupied in base configuration)
Up to 3TB RDIMM
Expansion Slots (8) PCI-E Gen3 x16 slots
(2) PCI-E Gen3 x8 Tyan Mezzanine slots
Storage Controller Intel C621
Speed 6.0 Gb/s
Connector (2) Mini-SAS HD (8 ports)
RAID RAID 0/1/10/5 (Intel RSTe)
LAN Controller Intel X550-AT2, (2) 10GbE ports
PHY (1) Realtek RTL8211E
I/O Ports
USB (2) USB3.0 ports (at front)
COM (1) DB-9 COM port (at front)
VGA (1) D-Sub 15-pin port (at front)
RJ-45 (2) 10GbE ports, (1) GbE dedicated for IPMI

Operating System

Description Ubuntu 16.04.3 LTS

NVIDIA TESLA V100

NVIDIA® Tesla® V100 is the world's most advanced data center GPU, built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads.
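
As an illustration of the kind of operation those Tensor Cores accelerate, the short sketch below times a large half-precision matrix multiply with PyTorch. It is illustrative only, not a Novatech benchmark; it assumes a CUDA-enabled PyTorch build and an arbitrary matrix size.

# Illustrative only: time a large FP16 matrix multiply on the first GPU.
# Assumes a CUDA-enabled PyTorch build; the 8192 x 8192 size is arbitrary.
import torch

a = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)
b = torch.randn(8192, 8192, device="cuda", dtype=torch.float16)

_ = a @ b                      # warm-up so cuBLAS initialisation is not timed
torch.cuda.synchronize()

start = torch.cuda.Event(enable_timing=True)
end = torch.cuda.Event(enable_timing=True)
start.record()
c = a @ b
end.record()
torch.cuda.synchronize()

elapsed_s = start.elapsed_time(end) / 1000.0   # elapsed_time() returns milliseconds
flops = 2 * 8192 ** 3                          # multiply-adds in an N x N matmul
print("Achieved FP16 throughput: {:.1f} TFLOPS".format(flops / elapsed_s / 1e12))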

INTEL® XEON® GOLD

Intel® Xeon® Scalable processors optimise interconnectivity with a focus on speed without compromising data security. Advanced features are woven into the silicon, and synergy among compute, network, and storage is built in.

Ready to go

All Novatech Deep Learning systems can come with the Ubuntu 16.04 Server LTS operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, PyTorch, TensorFlow, Theano, and Torch.

If you require a framework not listed, simply speak to our team and make them aware of your needs.
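
For new installations, a quick sanity check along the lines of the sketch below (a hypothetical script, not part of the standard image) confirms the driver can see every V100 and that the requested frameworks import correctly:

# Hypothetical post-installation check: list the visible GPUs via nvidia-smi and
# confirm that the requested frameworks are importable. Adjust the list as needed.
import importlib
import subprocess

# nvidia-smi ships with the NVIDIA driver; "-L" prints one line per visible GPU.
subprocess.run(["nvidia-smi", "-L"], check=True)

for name in ["torch", "tensorflow", "theano", "caffe"]:
    try:
        importlib.import_module(name)
        print("{}: importable".format(name))
    except ImportError:
        print("{}: not installed".format(name))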

Custom Engineering

We are ISO 9001:2008 certified and can manage the design, build, and configuration of compute, network, and storage solutions specific to your needs and applications.

We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulation needs are met.

Request a price

All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.

Contact our dedicated Deep Learning team today for a tailored quotation.
