Deep Learning Devbox

NVLink significantly accelerates application performance. By creating direct connections between multiple GPUs, and offering flexible connections between CPUs and GPUs, NVLink delivers data exchange rates 5 to 12 times faster than traditional PCIe. This higher GPU-to-GPU bandwidth offers much better multi-GPU scalability. The increased flexibility, accessibility, and efficiency sends your computational power into hyperdrive, driving the future of high performance computing.
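For developers, the direct GPU-to-GPU links that NVLink provides are exposed through CUDA's peer-to-peer API. A minimal sketch (assuming a system with at least two GPUs; on NVLink-equipped hardware the peer traffic travels over NVLink rather than PCIe) might look like:

```cuda
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int canAccess = 0;
    // Check whether GPU 0 can directly read/write GPU 1's memory.
    cudaDeviceCanAccessPeer(&canAccess, 0, 1);
    if (canAccess) {
        cudaSetDevice(0);
        // Enable direct peer access from GPU 0 to GPU 1; subsequent
        // peer copies can then bypass staging through host memory.
        cudaDeviceEnablePeerAccess(1, 0);
    }
    printf("GPU 0 -> GPU 1 peer access: %s\n", canAccess ? "yes" : "no");
    return 0;
}
```

On a system with NVLink-connected GPUs, `nvidia-smi topo -m` shows the interconnect topology and `nvidia-smi nvlink --status` reports the state of the individual links.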

At SabrePC, we want to help accelerate your research with our latest custom systems utilizing NVLink. With our fully optimized workstation and server solutions you can be sure to find the ideal system for your needs.

High-Speed Interconnect Accelerators

                                        Tesla P100 for SXM2    Quadro GP100
Double Precision Performance            5.3 TeraFLOPS          5.2 TeraFLOPS
Single Precision Performance            10.6 TeraFLOPS         10.3 TeraFLOPS
Half Precision Performance              21.2 TeraFLOPS         20.7 TeraFLOPS
CoWoS HBM2 Stacked Memory Capacity      16GB                   16GB
CoWoS HBM2 Stacked Memory Bandwidth     720 GB/s               720 GB/s
Part Number                             900-2H403-0000-000     VCQGP100-PB

Product Page: Tesla P100 Product Page

NVIDIA Quadro NVLink Bridge
2-Way 2-Slot Kit (NVLINK-2W2S-KIT)

This kit enables a high-performance connection between two Quadro GP100 GPUs. It includes two NVIDIA Quadro NVLink bridges.

learn more

SabrePC NVLink Deep Learning Devbox
• (2x) NVIDIA Quadro GP100 (with NVLink bridge)
• (1x) Core i7-5930K 6-Core 3.5GHz 15M Processor
• (4x) 16GB DDR4 memory
• (1x) 256GB SSD for OS/Deep Learning Software Stacks

learn more
  Compatible Applications
For Compute
• SIMULIA Standard
• ANSYS Mechanical
• CST Studio
• ANSYS Fluent
• MSC Nastran
For Visualization
• ANSYS Workbench
• ANSYS Design Modeler
• Altair HyperWorks
• Altair HyperMesh
• Altair HyperView
• Siemens NX SimCenter
• Siemens FEMAP
• Siemens NX CAE

SabrePC NVIDIA Tesla P100 Deep Learning 1U Server

• (4x) NVIDIA Tesla P100 SXM2 with NVLink Support
• (2x) Intel Xeon E5-2620 v4 2.1 GHz 20M Processor
• (8x) 16GB DDR4 RAM
• (1x) 256GB 2.5" SSD

learn more

SabrePC NVIDIA Tesla P100 Deep Learning 4U Server

• (8x) NVIDIA Tesla P100 SXM2 with NVLink Support
• (2x) Intel Xeon E5-2698 v4 2.20 GHz 50M Processor
• (16x) 32GB DDR4 RAM
• (4x) 2TB 2.5" SSD in RAID 0

learn more

Power8-NVLink Server

• Dual POWER8 with NVLink processors, X-Bus up to 4.8 GT/s, 190W TDP
• Up to (32x) DDR4 ECC DIMMs at 1600MHz
• (4x) NVIDIA Tesla P100 SXM2 GPUs
• Five expansion slots: (2x) PCIe x16 and (3x) PCIe x8

learn more