Novatech DL-TR1

This is the first in our range of systems purpose-built for deep learning training. Most of the systems in this range, when fully populated with Tesla products, can also be used for inference, depending on the complexity of your models. Two Intel Xeon Scalable Gold CPUs provide 96 PCI-E lanes, increasing bandwidth and lowering latency between CPU, memory, and GPU to improve data flow. Server-grade ECC memory reduces memory-to-CPU latency, while redundant PSUs improve reliability. The platform scales up to a maximum of 4 GPUs, offering roughly 250-500 teraFLOPS of half-precision (Tensor Core) compute based on the Tesla V100 specification, allowing larger data sets to be trained. The system incorporates SSD storage to increase system speed, as well as offering 4TB (default specification) of mechanical drive storage.
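
As an illustration of how the installed GPUs can be put to work, the short sketch below spreads a training loop across every Tesla card that PyTorch (one of the frameworks listed under "Ready to go") can see. It is a minimal sketch only: the model, synthetic data, and hyperparameters are illustrative placeholders, not part of the shipped configuration.

# Minimal multi-GPU training sketch (assumes PyTorch with CUDA support is installed;
# the model, dataset, and hyperparameters are illustrative placeholders only).
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(f"GPUs visible to PyTorch: {torch.cuda.device_count()}")

# Placeholder model and synthetic data, for illustration only.
model = nn.Sequential(nn.Linear(1024, 512), nn.ReLU(), nn.Linear(512, 10))
if torch.cuda.device_count() > 1:
    # Replicate the model across all installed Tesla cards (2 or 4 on this system).
    model = nn.DataParallel(model)
model = model.to(device)

data = TensorDataset(torch.randn(4096, 1024), torch.randint(0, 10, (4096,)))
loader = DataLoader(data, batch_size=256, shuffle=True)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for inputs, targets in loader:
    inputs, targets = inputs.to(device), targets.to(device)
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)  # forward pass split across the GPUs
    loss.backward()
    optimizer.step()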

Novatech Deep Learning DL-TR1 Workstation
Available within 10 working days

Specification

GPU

2x NVIDIA Tesla V100 28 TFLOPS FP32

CPU

2x Intel Xeon Scalable Gold 5118 2.3GHz 12 Cores - Each CPU capable of addressing 48 PCI-E Lanes

STORAGE

OS: Samsung PM863A 480GB SSD
Data: 2x 2TB
Fully configurable

Memory

192GB 2666MHz 6 Channel (6x32GB) ECC Registered

GPU Specification

GPUs Installed 2x NVIDIA Tesla V100
Maximum Number of GPUs 4x
CUDA Cores Per GPU 5120
Peak Half Precision FP16 Performance Per GPU 112 TFLOPS
Peak Single Precision FP32 Performance Per GPU 14 TFLOPS
Peak Double Precision FP64 Performance Per GPU 7 TFLOPS
GPU Memory Per GPU 16 GB HBM2
Memory Interface Per GPU 4096-bit
Memory Bandwidth Per GPU 900 GB/s
System Interface PCI Express 3.0 x16
Maximum Power Consumption Per GPU 250 W

CPU (Two installed in default configuration)

Description Intel® Xeon® Gold 5118 Processor
# of Cores 12
# of Threads 24
Processor Base Frequency 2.30 GHz
Max Turbo Frequency 3.20 GHz
Cache 16.5 MB
TDP 105 W

Memory

Description 192GB (6x32GB) 2666MHz DDR4 ECC Registered
Maximum Capacity 2TB

Storage

Drive 1 1x 480GB 2.5" SSD 6Gb/s
Drive 2 2x Seagate Exos 2TB E-Class Nearline Enterprise SAS 2.5" Hard Drive

Chassis

Description 2U Rackmountable
Colour Black
Dimensions 440(W) x 88(H) x 800(D)mm
2.5" Hotswap Drive Bays x8 (x3 Occupied in base configuration)
5.25" Drive Bays x0

Power Supply

Description 1+1 Redundant 1600W 80 PLUS Platinum Power Supply

Motherboard

CPU 2 x Socket P (LGA 3647)
Intel® Xeon® Scalable Processors Family (165W)
Intel® Xeon® Scalable Processors Family with OMNI-PATH FABRIC (supported on CPU1)(165W)
*Refer to support page for more information
UPI (10.4 GT/s)
Chipset Intel® C621
Memory 16 DIMM slots (6 Occupied in base configuration)
Up to 2TB RDIMM
Expansion Slots Total: 8 + 3
Full-length/Full-height
8 * PCI-E 3.0 x16 (4 at x16 Link or 8 at x8 Link)


Half-length/Low-profile
Rear:
1 * PCI-E 3.0 x24 (supported with riser card)
(1 * PCI-E x16 (x16 Gen3 Link) and 1 * PCI-E x8 (x8 Gen3 Link))
Front:
1 * PCI-E x8 (internal HBA/RAID card)
LAN 1 x Dual Port Intel Ethernet Controller i350-AM2 + 1 x Mgmt LAN
USB Ports 2 x USB 3.0 ports (Rear I/O)

Operating System

Description Ubuntu 16.04.3 LTS

NVIDIA TESLA V100

NVIDIA® Tesla® V100 is the world’s most advanced data center GPU, built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads.
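
To show how the Tensor Cores mentioned above are typically engaged, here is a small mixed-precision sketch using PyTorch's automatic mixed precision. It assumes PyTorch 1.6 or later with a CUDA-capable GPU; the model, optimizer, and synthetic batch are illustrative placeholders rather than a recommended training recipe.

# Mixed-precision sketch (illustrative): autocast runs eligible ops in FP16 so they can
# use the V100's Tensor Cores; GradScaler protects small FP16 gradients from underflow.
# Requires PyTorch 1.6+ and a CUDA GPU.
import torch
import torch.nn as nn

device = torch.device("cuda")
model = nn.Linear(1024, 10).to(device)               # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()

inputs = torch.randn(256, 1024, device=device)       # synthetic batch
targets = torch.randint(0, 10, (256,), device=device)

optimizer.zero_grad()
with torch.cuda.amp.autocast():
    loss = criterion(model(inputs), targets)          # forward pass in mixed precision
scaler.scale(loss).backward()                         # scaled backward pass
scaler.step(optimizer)
scaler.update()

Casting the forward pass to FP16 lets the matrix-multiply-heavy layers run on the Tensor Cores, which is where the 112 TFLOPS half-precision figure quoted in the specification comes from; the GradScaler step simply keeps small gradients from underflowing in FP16.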

INTEL® XEON® GOLD

Intel® Xeon® Scalable processors optimise interconnectivity with a focus on speed without compromising data security. Advanced features are woven into the silicon, building in synergy among compute, network, and storage.

Ready to go

All Novatech Deep Learning systems can come with the Ubuntu 16.04 Server LTS operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, PyTorch, TensorFlow, Theano, and Torch.
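
Once a system arrives, a quick sanity check such as the sketch below (illustrative only, not an official Novatech script) confirms that the CUDA toolchain and the installed Tesla cards are visible to PyTorch; equivalent calls exist for TensorFlow and the other listed frameworks.

# Quick environment check (a sketch, not an official Novatech script): confirms the CUDA
# toolchain and the installed GPUs are visible to PyTorch.
import torch

print("CUDA available:", torch.cuda.is_available())
print("cuDNN enabled:", torch.backends.cudnn.enabled)
print("GPUs detected:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    props = torch.cuda.get_device_properties(i)
    print(f"  GPU {i}: {props.name}, {props.total_memory // 2**30} GiB")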

If you require a framework not listed, simply speak to our team and make them aware of your needs.

Custom Engineering

We are ISO 9001:2008 certified and can manage the design, build, and configuration of compute, network, and storage solutions specific to your needs and applications.

We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulation needs are met.

Request a price

All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.

Contact our dedicated Deep Learning team today for a tailored quotation.
