NVIDIA Grace Hopper GH200 Superchip


Innovative 1:1 Compute

NVIDIA GH200 Grace Hopper introduces a new class of hardware: a tightly interconnected 1-to-1 CPU-GPU processor and accelerator built for performance and efficiency.


Unparalleled Bandwidth

The NVIDIA NVLink-C2C interconnect delivers 900 GB/s of bidirectional bandwidth, minimizing data-transfer bottlenecks between the Grace CPU and the Hopper GPU.


Immense Scalability

The NVIDIA NVLink Switch System scales GH200 nodes by connecting up to 256 NVIDIA Grace Hopper Superchips into a seamless, high-bandwidth system.

Accelerated Servers featuring NVIDIA Grace Hopper GH200

SabreEDGE-1u-14

Single NVIDIA Grace Hopper 1U Server (Air Cooled)


1x NVIDIA Grace Hopper GH200
Shared 480GB LPDDR5X (CPU)
Shared 96GB HBM3 (GPU)
3x Double-Wide Accelerators on PCIe 5.0 x16
8x E1.S NVMe SSD Hot-swap


Learn More
SabreEDGE-1u-14

Single NVIDIA Grace Hopper 1U Server (Liquid Cooled)


1x NVIDIA Grace Hopper GH200
Shared 480GB LPDDR5X (CPU)
Shared 96GB HBM3 (GPU)
3x Double-Wide Accelerators on PCIe 5.0 x16
8x E1.S NVMe SSD Hot-swap


Learn More
SabreEDGE-1u-13

2-Node Single NVIDIA Grace Hopper 1U Server (Liquid Cooled)


2x NVIDIA Grace Hopper GH200 (1x per node)
Shared 960GB LPDDR5X (CPU) (480GB per node)
Shared 192GB HBM3 (GPU) (96GB per node)
4x Double-Wide Accelerators on PCIe 5.0 x16 (2x per node)
8x E1.S NVMe SSD Hot-swap (4x per node)


Learn More

Servers featuring Grace, NVIDIA's Arm-based CPU

SabreEDGE-1u-13

2-Node NVIDIA Grace CPU 1U Server (Liquid Cooled)


2x NVIDIA Grace CPU (1x per node)
960GB LPDDR5X Memory (480GB per node)
4x Double-Wide Accelerators on PCIe 5.0 x16 (2x per node)
8x E1.S NVMe SSD Hot-swap (4x per node)


Buy Now
SabreEdge-2U-9

Single NVIDIA Grace CPU Quad GPU 2U Server


1x NVIDIA Grace CPU
480GB LPDDR5X Memory
4x Double-Wide Accelerators on PCIe 5.0 x16
8x E1.S NVMe SSD Hot-swap


Learn More

Powering Next-Gen AI

To facilitate new discoveries, NVIDIA has developed a computing platform that accelerates and powers the next generation of AI. The NVIDIA Grace Hopper Superchip integrates one CPU and one GPU, connected via NVLink-C2C, to deliver a uniquely balanced, powerful, and efficient processor and accelerator for AI training, simulation, and inference. With up to 7X more fast-access memory available to the GPU and 7X the bandwidth of PCIe Gen5, the NVIDIA GH200 takes AI workloads to new heights.
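
To the programmer, the coherence NVLink-C2C provides shows up in CUDA as the GPU's ability to work directly on memory allocated with ordinary malloc on the Grace CPU. The sketch below is a minimal illustration under that assumption, written for a GH200 node whose driver reports the pageableMemoryAccess device property; it is not vendor sample code, and error handling is trimmed.

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel scales data that was allocated with plain malloc on the Grace CPU.
__global__ void scale(double *data, size_t n, double factor) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    // NVLink-C2C coherence is surfaced to CUDA as pageable memory access:
    // the GPU can read and write system-allocated (malloc) memory directly.
    cudaDeviceProp prop{};
    cudaGetDeviceProperties(&prop, 0);
    if (!prop.pageableMemoryAccess) {
        printf("This system does not expose direct GPU access to pageable host memory.\n");
        return 0;
    }

    const size_t n = 1 << 20;
    // Ordinary CPU allocation in Grace's LPDDR5X; no cudaMalloc/cudaMemcpy staging.
    double *data = static_cast<double *>(malloc(n * sizeof(double)));
    for (size_t i = 0; i < n; ++i) data[i] = 1.0;

    scale<<<(n + 255) / 256, 256>>>(data, n, 2.0);  // GPU updates CPU memory in place
    cudaDeviceSynchronize();

    printf("data[0] = %.1f (expected 2.0)\n", data[0]);  // CPU sees the GPU's result
    free(data);
    return 0;
}
```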

[Image: NVIDIA GH200 Grace Hopper Superchip]

Two Groundbreaking Advancements in One

The NVIDIA GH200 Superchip fuses the Grace CPU and Hopper GPU architectures over NVIDIA NVLink-C2C, a memory-coherent, high-bandwidth, low-latency superchip interconnect between the CPU and GPU.

  • The NVIDIA Grace CPU is NVIDIA's first data center CPU, featuring 72 Arm Neoverse V2 cores and 480GB of LPDDR5X memory. Grace delivers 53% more bandwidth than comparable DDR5 designs at one-eighth the power per GB/s, for optimal energy efficiency.
  • The NVIDIA Hopper GPU features the groundbreaking Transformer Engine, which mixes FP8 and FP16 precision formats (illustrated in the sketch below). By intelligently managing accuracy, Hopper gains dramatic AI performance: up to 9X faster training and 30X faster inference than the prior generation.
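
For a concrete feel of the FP8 format that the Transformer Engine mixes with FP16, the sketch below round-trips a few values through the E4M3 type exposed by CUDA's cuda_fp8.h header (available in CUDA 11.8 and later). It is only an illustration of the storage format's quantization behavior, not the Transformer Engine API itself.

```cuda
#include <cmath>
#include <cstdio>
#include <cuda_fp16.h>   // __half types used by the FP8 conversions
#include <cuda_fp8.h>    // __nv_fp8_e4m3 (8-bit: 1 sign, 4 exponent, 3 mantissa bits)

int main() {
    // Quantize a few FP32 values to FP8 E4M3 and back to see the rounding
    // the Transformer Engine has to manage when it chooses FP8 over FP16.
    const float samples[] = {0.1234f, 1.0f, 3.14159f, 448.0f};
    for (float x : samples) {
        __nv_fp8_e4m3 q(x);                  // 1-byte E4M3 encoding
        float back = static_cast<float>(q);  // decode for comparison
        printf("fp32 %10.5f -> fp8(e4m3) -> %10.5f  (abs err %.5f)\n",
               x, back, fabsf(x - back));
    }
    return 0;
}
```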

Have any questions? We've got answers.

Our experts are here to help you every step of the way in finding the right solution.