Novatech DL-TR3

Designed specifically for academia (research usage) as stipulated by NVIDIA, and powered by Intel's Broadwell-E platform, the system uses two CPUs, only one of which addresses the GPUs on the PCI-E bus. This single root complex is achieved via PCI-E switching, lowering GPU-to-GPU latency. Supporting up to 10 GPUs in one system offers significant value, increasing ROI. Server-grade ECC memory improves reliability, as do the redundant PSUs. The platform is scalable up to a maximum of 10 GPUs, offering 113 teraFLOPS of FP32 compute performance (based on the GTX 1080 Ti specification) and allowing larger data sets to be trained. The system incorporates SSD storage to increase system speed, as well as offering 4TB (default spec) of mechanical drive storage.
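The 113 teraFLOPS figure can be reproduced from NVIDIA's published GTX 1080 Ti specification (3,584 CUDA cores at a 1,582 MHz boost clock, two FP32 operations per core per cycle via fused multiply-add); a quick sketch of the arithmetic:

```python
# Peak FP32 throughput = cores x 2 FLOPs/cycle (one FMA) x clock
cuda_cores = 3584              # GTX 1080 Ti
boost_clock_hz = 1.582e9       # 1582 MHz boost clock
flops_per_core_per_cycle = 2   # fused multiply-add counts as 2 FLOPs

per_gpu_tflops = cuda_cores * flops_per_core_per_cycle * boost_clock_hz / 1e12
system_tflops = 10 * per_gpu_tflops  # fully populated DL-TR3

print(round(per_gpu_tflops, 1))      # 11.3 TFLOPS per card
print(int(round(system_tflops)))     # 113 TFLOPS across 10 GPUs
```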

Novatech Deep Learning DL-TR3 Workstation
Available within 10 working days



2x NVIDIA V100 PCI-E Supports up to 10 GPUs


2x Intel Xeon E5-2628L v4 1.9GHz 12 Core Low Power Consumption (75W) - Each CPU capable of addressing 40 PCI-E Lanes


OS: Samsung PM863A 480GB SSD
Data: 2x 2TB
Fully configurable


256GB 2400MHz Quad Channel (4x64GB) ECC REG

GPU Specification

GPUs Installed 2x NVIDIA V100 PCI-E
Maximum Number of GPUs 10x
CUDA Cores Per GPU 5120
Peak Deep Learning (Tensor) Performance Per GPU 112 TFLOPS
Peak Single Precision FP32 Performance Per GPU 14 TFLOPS
Peak Double Precision FP64 Performance Per GPU 7 TFLOPS
GPU Memory Per GPU 16 GB HBM2
Memory Interface Per GPU 4096-bit
Memory Bandwidth Per GPU 900 GB/s
System Interface PCI Express 3.0 x16
Maximum Power Consumption Per GPU 250 W

CPU (Two in Default configuration)

Description Intel® Xeon® E5-2628L v4 Processor
# of Cores 12
# of Threads 24
Processor Base Frequency 1.90 GHz
Max Turbo Frequency 2.40 GHz
Cache 30 MB SmartCache
TDP 75 W


Memory

Description 256GB (4x64GB) 2400MHz DDR4 ECC Registered


Storage

Drive 1 1x 480GB 2.5" SSD 6Gb/s
Drive 2 2x Seagate Exos 2TB E-Class Nearline Enterprise SAS 2.5" Hard Drive


Chassis

Description 4.5U Rackmountable
Colour Black
Dimensions 437(W) x 178(H) x 737(D)mm
2.5" Hotswap Drive Bays x24 (x3 occupied in base configuration)
5.25" Drive Bays x0
Cooling 8 Hot-swap 92mm cooling fans

Power Supply

Description 2000W Redundant 80 PLUS Titanium Power Supplies with PMBus


Motherboard

CPU Intel® Xeon® processor E5-2600 v4 / v3 family (up to 160W TDP)
Dual Socket R3 (LGA 2011)
Chipset Intel® C612
Memory 24 DIMM slots (4 occupied in base configuration)
Expansion Slots 11 PCI-E 3.0 x16 (FH, FL) slots
1 PCI-E 3.0 x8 (in x16) slots
Single Root Complex
Storage 10 SATA3 (6Gbps) ports
LAN Intel® X550 Dual Port 10GBase-T controller
2 RJ45 10GBase-T ports
1 RJ45 Dedicated IPMI LAN port
USB Ports 4 USB 3.0 ports
Operating System

Description Ubuntu 16.04.3 LTS


NVIDIA® Tesla® V100 is the world’s most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta, the latest GPU architecture, Tesla V100 offers the performance of 100 CPUs in a single GPU—enabling data scientists, researchers, and engineers to tackle challenges that were once impossible.

Tesla V100 is engineered for the convergence of AI and HPC. It offers a platform for HPC systems to excel at both computational science for scientific simulation and data science for finding insights in data. By pairing NVIDIA CUDA® cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU-only servers for both traditional HPC and AI workloads.

Single root vs Dual root complex

A root complex connects the PCI Express switch fabric to the processor and system memory. It may be integrated into the processor itself, generating PCI Express transaction requests on the CPU's behalf. Servers with a single root complex provide the highest number of GPUs within a single server and enable full connectivity between the GPUs. Large amounts of traditional or high-speed storage can also be built into the server.

At boot, the root port also enumerates the devices on the bus: it discovers the endpoints, calculates how much address space each one requires, and allocates each device a region of the system's address space. Although it is customary to have a single root complex in a system, dual root complexes are possible in systems with many targets.
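That enumeration step can be illustrated with a simplified sketch (a toy model, not real PCI Express code; the device names and window sizes below are hypothetical): each endpoint reports the size of the memory window it needs, and the root complex assigns each one a naturally aligned base address.

```python
# Toy model of PCI-E enumeration: the root complex discovers endpoints,
# reads the size of the memory window (BAR) each one requests, and
# assigns naturally aligned base addresses.

def allocate_bars(endpoints, base=0x9000_0000):
    """Assign each endpoint a base address aligned to its window size."""
    assignments = {}
    addr = base
    for name, size in endpoints:
        # PCI-E memory windows must be aligned to their own (power-of-two) size
        addr = (addr + size - 1) & ~(size - 1)
        assignments[name] = addr
        addr += size
    return assignments

# Example: two GPUs requesting 16 MiB windows and a NIC requesting 1 MiB
devices = [("gpu0", 16 * 2**20), ("gpu1", 16 * 2**20), ("nic0", 2**20)]
for dev, bar in allocate_bars(devices).items():
    print(f"{dev}: BAR at {bar:#010x}")
```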

Configured for scaling with GPU-Direct RDMA

As shown in the diagram below, the single root complex allows for exceptional performance within the server and across a cluster of InfiniBand-connected systems. Utilizing GPU-Direct RDMA, any GPU in the cluster may directly access the data of any other GPU (remote memory access).

4U server with Single Root Complex (8-GPU config for RDMA)

Configured for density with 10 GPUs per server

For projects which require many GPUs per system, the DL-TR3 may be configured with ten GPUs on a single PCI-Express root complex. As shown in the diagram below, one additional PCI-Express card may also be added (although this card is not on the same PCI-Express root complex).

4U server with Single Root Complex (10-GPU config for density)

Ready to go

All Novatech Deep Learning systems can come with the Ubuntu 16.04 LTS server operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, PyTorch, TensorFlow, Theano, and Torch.

If you require a framework not listed, simply speak to our team and make them aware of your needs.
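Before installing any of the frameworks above, it is worth confirming the NVIDIA driver is present on the system; a minimal sketch using only the Python standard library:

```python
import shutil
import subprocess

def nvidia_driver_present():
    """Return True if the nvidia-smi utility is available on PATH."""
    return shutil.which("nvidia-smi") is not None

if nvidia_driver_present():
    # Lists each GPU the driver can see, e.g. "GPU 0: Tesla V100-PCIE-16GB"
    subprocess.run(["nvidia-smi", "-L"], check=True)
else:
    print("NVIDIA driver not found - install it before CUDA or any framework")
```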

Custom Engineering

We are ISO 9001:2008 certified and can manage the design, build, and configuration of compute, network, and storage solutions specific to your needs and applications.

We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulatory needs are met.

Request a price

All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.

Contact our dedicated Deep Learning team today for a tailored quotation.
