Novatech DL-DT1

This system is designed as a compact 1U inference server for deployment. It uses two Intel Xeon Scalable CPUs, and NVLink offers up to 5-10x the bandwidth of a traditional PCI-E bus between GPU and GPU and between GPU and CPU, eliminating transfer bottlenecks and increasing effective GPU compute performance. Server-grade ECC memory improves reliability, as do the redundant PSUs. The system is scalable up to a maximum of 4 GPUs (each V100 offering up to 125 teraflops of tensor performance), allowing for real-time inference deployment over large, fast networks. It also incorporates 2x 10GbE ports for improved communication with multiple nodes and other resources on the network, in particular storage, which can often be a limiting factor.

Novatech Deep Learning DL-DT1 Workstation
Available within 10 working days



2x NVIDIA Tesla V100 31.4 TFLOPS FP32


2x Intel Xeon Scalable Gold 5118 2.3GHz 12 Cores - each CPU provides 48 PCI-E lanes


OS: Samsung PM863A 480GB SATA SSD
Data: 2x 2TB
Fully configurable


192GB 2666MHz 6-Channel (6x32GB) ECC REG

GPU Specification

GPUs Installed 2x NVIDIA Tesla V100 SXM2 NVLink 2.0
Maximum Number of GPUs 4x
CUDA Cores Per GPU 5120
Peak Tensor Performance (mixed-precision FP16) Per GPU 125 TFLOPS
Peak Single Precision FP32 Performance Per GPU 15.7 TFLOPS
Peak Double Precision FP64 Performance Per GPU 7.8 TFLOPS
GPU Memory Per GPU 16 GB HBM2
Memory Interface Per GPU 4096-bit
Memory Bandwidth Per GPU 900 GB/s
System Interface SXM2
Maximum Power Consumption Per GPU 300 W
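The quoted 900 GB/s memory bandwidth follows from the 4096-bit interface width and the HBM2 per-pin data rate (roughly 1.75 Gb/s per pin on V100, a figure taken from public V100 specifications rather than this sheet). A quick back-of-envelope check:

```python
# Memory bandwidth = interface width (bytes) * per-pin data rate.
# The 1.75 Gb/s pin rate is an assumption from public V100 specs,
# not a value stated on this datasheet.
interface_bits = 4096
pin_rate_gbps = 1.75  # gigabits per second per pin

bandwidth_gbs = interface_bits / 8 * pin_rate_gbps
print(f"{bandwidth_gbs:.0f} GB/s")  # ~896 GB/s, matching the quoted 900 GB/s
```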

CPU (Two installed in default configuration)

Description Intel® Xeon® Gold 5118 Processor
# of Cores 12
# of Threads 24
Processor Base Frequency 2.30 GHz
Max Turbo Frequency 3.20 GHz
Cache 16.5 MB
TDP 105 W


Description 384GB (6x64GB) 2666MHz DDR4 ECC Registered
Maximum Capacity 3TB
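With all six channels per socket populated, DDR4-2666 gives a theoretical peak memory bandwidth of roughly 128 GB/s per CPU (channels × channel width × transfer rate). A minimal sketch of that arithmetic:

```python
# Theoretical peak bandwidth for a fully populated 6-channel DDR4-2666 socket.
channels = 6
bus_width_bytes = 8        # each DDR4 channel is 64 bits wide
transfer_rate = 2666e6     # transfers per second for DDR4-2666

peak_gbs = channels * bus_width_bytes * transfer_rate / 1e9
print(f"{peak_gbs:.0f} GB/s per socket")  # ~128 GB/s
```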


Drive 1 1x Samsung PM863A 480GB 2.5" SSD 6Gb/s
Drive 2 2x Seagate Enterprise Performance 2TB HDD 2.5" 7200RPM


Description 1U Rackmountable
Colour Black
Dimensions 437(W) x 43(H) x 997(D)mm
2.5" Hotswap Drive Bays 2x Hotswap (2x occupied in base configuration)
2x Fixed internal (occupied in base configuration)
5.25" Drive Bays x0
System Cooling Configuration 7 Heavy duty 4cm counter-rotating fans with air shroud & optimal fan speed control

Power Supply

Description 2000W Redundant Power Supplies with PMBus, 80 PLUS Titanium


CPU 2 x Socket P (LGA 3647)
Intel® Xeon® Scalable Processors Family (165W)
Intel® Xeon® Scalable Processors Family with OMNI-PATH FABRIC (supported on CPU1)(165W)
*Refer to support page for more information
UPI (10.4 GT/s)
Chipset Intel® C621
Memory 12 DIMM slots (6 Occupied in base configuration)
Up to 1.5TB RDIMM
Expansion Slots 4 PCI-E 3.0 x16 slots
Storage Controller
Intel C621
6.0 Gb/s
RAID 0/1/10/5
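Usable capacity from the data drives depends on which of the supported RAID levels is configured. A generic sketch of the trade-off (a simplified model, not vendor tooling):

```python
def usable_capacity(level, n_drives, drive_tb):
    """Usable capacity in TB for common RAID levels (simplified model)."""
    if level == 0:
        return n_drives * drive_tb              # striping, no redundancy
    if level == 1:
        return drive_tb                         # full mirror
    if level == 10:
        if n_drives < 4 or n_drives % 2:
            raise ValueError("RAID 10 needs an even number of drives, >= 4")
        return n_drives // 2 * drive_tb         # striped mirrors
    if level == 5:
        if n_drives < 3:
            raise ValueError("RAID 5 needs at least 3 drives")
        return (n_drives - 1) * drive_tb        # one drive's worth of parity
    raise ValueError(f"unsupported level: {level}")

# The base configuration's 2x 2TB data drives:
print(usable_capacity(0, 2, 2))   # 4 TB striped, no redundancy
print(usable_capacity(1, 2, 2))   # 2 TB mirrored
```

RAID 5 and 10 become available once additional drives are added via the configurable storage options.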
LAN Controller

Intel X540
(2) 10GbE ports / (1) Dedicated IPMI LAN Port
I/O Ports
USB: (2) USB 3.0 ports (at rear)
(1) Fast UART 16550 header (internal)
(1) D-Sub 15-pin port (at rear)
RJ-45: (2) 10GbE ports, (1) GbE dedicated for IPMI

Operating System

Description Ubuntu 16.04.3 LTS


From recognising speech to training virtual personal assistants and teaching autonomous cars to drive, data scientists are taking on increasingly complex challenges with AI. Solving these kinds of problems requires training deep learning models that are exponentially growing in complexity, in a practical amount of time.

With 640 Tensor Cores, Tesla V100 is the world’s first GPU to break the 100 teraflops (TFLOPS) barrier of deep learning performance.
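That 100+ TFLOPS figure can be sanity-checked from the architecture: each Tensor Core performs a 4x4x4 matrix multiply-accumulate per clock (64 fused multiply-adds, i.e. 128 floating-point operations). At V100's boost clock of roughly 1.53 GHz (an assumption from public specifications, not this page), 640 Tensor Cores give:

```python
tensor_cores = 640
flops_per_core_per_clock = 128   # 64 FMAs * 2 ops each (4x4x4 matrix MAC)
boost_clock_hz = 1.53e9          # approximate V100 boost clock (assumed)

tflops = tensor_cores * flops_per_core_per_clock * boost_clock_hz / 1e12
print(f"{tflops:.0f} TFLOPS")  # ~125 TFLOPS of mixed-precision throughput
```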

The GPUs connect to the motherboard via an SXM2 socket, which offers between 5 and 10 times the bandwidth of traditional PCI-E 3.0, decreasing latency and increasing throughput from GPU to GPU as well as GPU to CPU.

NVLink versus PCI-E

Unleash ultra-fast communication between the GPU and CPU with NVIDIA® NVLink, a high-bandwidth, energy-efficient interconnect that allows data sharing at rates 5 to 10 times faster than the traditional PCIe Gen3 interconnect. The result is a dramatic speed-up in application performance, enabling a new breed of high-density, flexible servers for accelerated computing.
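The 5-10x claim is consistent with the raw link numbers: PCIe 3.0 x16 delivers about 15.75 GB/s per direction (8 GT/s per lane across 16 lanes with 128b/130b encoding), while each of a V100's six NVLink 2.0 links delivers 25 GB/s per direction. A back-of-envelope comparison:

```python
# PCIe 3.0 x16: 8 GT/s per lane, 16 lanes, 128b/130b encoding overhead.
pcie_gbs = 8e9 * 16 * (128 / 130) / 8 / 1e9   # ~15.75 GB/s per direction

# NVLink 2.0 on V100: 6 links at 25 GB/s per direction each.
nvlink_gbs = 6 * 25.0                          # 150 GB/s per direction

print(f"NVLink is ~{nvlink_gbs / pcie_gbs:.1f}x PCIe 3.0 x16")  # ~9.5x
```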


Intel® Xeon® Scalable processors optimise interconnectivity with a focus on speed without compromising data security. Advanced features are woven into the silicon, and synergy among compute, network, and storage is built in.

Ready to go

All Novatech Deep Learning systems can come with the Ubuntu 16.04 Server LTS operating system, and the following additional platforms are available: CUDA, DIGITS, Caffe, Caffe2, CNTK, PyTorch, TensorFlow, Theano, and Torch.
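A quick way to confirm which of these frameworks are present on a delivered system is to probe for their Python modules. The import names below are the usual ones for each framework (an assumption on our part, not something this page specifies):

```python
import importlib.util

# Usual import names for the frameworks listed above (assumed, not from this page).
FRAMEWORKS = ["tensorflow", "torch", "caffe2", "cntk", "theano"]

def available_frameworks(names):
    """Map each module name to whether it can be imported on this system."""
    return {name: importlib.util.find_spec(name) is not None for name in names}

print(available_frameworks(FRAMEWORKS))
```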

If you require a Framework not listed, simply speak to our team and make them aware of your need.

Custom Engineering

We are ISO 9001:2008 certified and can manage your design, build, and configuration of compute, network, and storage solutions, specific to your needs and applications.

We have invested heavily in our in-house production facilities to ensure that all of our customers' compliance, documentation, and regulation needs are met.

Request a price

All of our systems are built to order to meet our customers' needs, and as such pricing varies depending on requirements.

Contact our dedicated Deep Learning team today for a tailored quotation: email [email protected] or call 023 9232 2500.
