PNY NVH100TCGPU-KIT NVIDIA H100 Graphic Card - 80 GB HBM3 - 2 Slot - Passive - PCIe 5.0 x16

MPN: NVH100TCGPU-KIT
Out of Stock
Highlights
  • Standard Memory: 80 GB
  • Cooler Type: Passive Cooler
  • Product Type: Graphic Card
  • Condition: New
$34,335.78
Non-cancelable and non-returnable
B2B pricing options available.

SabrePC B2B Account Services

With a B2B account, save instantly and shop with assurance knowing that a dedicated account team is only a phone call or email away to help answer any of your questions.

  • Business-Only Pricing
  • Personalized Quotes
  • Fast Delivery
  • Products and Support
Need Help? Let's talk about it.

NVIDIA H100 PCIe | Unprecedented Performance, Scalability, and Security for Every Data Center

The NVIDIA H100 Tensor Core GPU enables an order-of-magnitude leap for large-scale AI and HPC, with unprecedented performance, scalability, and security for every data center, and includes the NVIDIA AI Enterprise software suite to streamline AI development and deployment. H100 accelerates exascale workloads with a dedicated Transformer Engine for trillion-parameter language models. For smaller jobs, H100 can be partitioned into right-sized Multi-Instance GPU (MIG) partitions. With Hopper Confidential Computing, this scalable compute power can secure sensitive applications on shared data center infrastructure. The inclusion of NVIDIA AI Enterprise with H100 PCIe purchases reduces development time, simplifies deployment of AI workloads, and makes H100 the most powerful end-to-end AI and HPC data center platform.
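
For readers curious how the Transformer Engine surfaces to software, NVIDIA's open-source Transformer Engine library exposes FP8 execution on Hopper GPUs through a PyTorch-style API. The snippet below is an illustrative sketch only, not part of this listing; the layer sizes are arbitrary and it assumes a working CUDA environment with PyTorch and the transformer-engine package installed.

    # Illustrative sketch only: an FP8 forward/backward pass through the
    # Transformer Engine PyTorch bindings on a Hopper GPU. Layer sizes are
    # arbitrary; assumes CUDA, PyTorch, and the transformer-engine package.
    import torch
    import transformer_engine.pytorch as te
    from transformer_engine.common import recipe

    # A small linear layer managed by Transformer Engine, placed on the GPU.
    model = te.Linear(768, 3072, bias=True).cuda()
    inp = torch.randn(2048, 768, device="cuda")

    # FP8 recipe using the E4M3 format with delayed scaling.
    fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.E4M3)

    # Run the forward pass under FP8 autocast, then backpropagate as usual.
    with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
        out = model(inp)
    out.sum().backward()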

The NVIDIA Hopper architecture delivers unprecedented performance, scalability, and security to every data center. Hopper builds on prior generations with everything from new compute core capabilities, such as the Transformer Engine, to faster networking, powering the data center with an order-of-magnitude speedup over the prior generation. NVIDIA NVLink supports ultra-high bandwidth and extremely low latency between two H100 boards, along with memory pooling and performance scaling (application support required). Second-generation MIG securely partitions the GPU into isolated, right-sized instances to maximize quality of service (QoS) for 7x more secure tenants. The inclusion of NVIDIA AI Enterprise (exclusive to the H100 PCIe), a software suite that optimizes the development and deployment of accelerated AI workflows, maximizes performance through these new H100 architectural innovations. These technology breakthroughs fuel the H100 Tensor Core GPU, the world's most advanced GPU.
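
As context for the MIG and NVLink features described above, the NVML bindings for Python can report whether MIG mode is enabled and which NVLink links are active. The snippet below is a minimal sketch under the assumption that the NVIDIA driver and the nvidia-ml-py (pynvml) package are installed; it is illustrative only and not part of the product listing.

    # Illustrative sketch only: inspect MIG mode and NVLink link state on GPU 0
    # via the nvidia-ml-py (pynvml) bindings. Assumes the NVIDIA driver and the
    # nvidia-ml-py package are installed; output depends on your system.
    import pynvml

    pynvml.nvmlInit()
    try:
        handle = pynvml.nvmlDeviceGetHandleByIndex(0)
        mem = pynvml.nvmlDeviceGetMemoryInfo(handle)
        print(f"Total device memory: {mem.total / 1024**3:.0f} GiB")

        # MIG mode: raises NVMLError on GPUs that do not support MIG.
        try:
            current, pending = pynvml.nvmlDeviceGetMigMode(handle)
            print("MIG enabled" if current == pynvml.NVML_DEVICE_MIG_ENABLE
                  else "MIG disabled")
        except pynvml.NVMLError:
            print("MIG not supported on this device")

        # NVLink: probe each possible link; unsupported links raise NVMLError.
        for link in range(pynvml.NVML_NVLINK_MAX_LINKS):
            try:
                state = pynvml.nvmlDeviceGetNvLinkState(handle, link)
                print(f"NVLink {link}: {'active' if state else 'inactive'}")
            except pynvml.NVMLError:
                break
    finally:
        pynvml.nvmlShutdown()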