NVIDIA 900-2G503-0000-000 Tesla V100 SXM2 Passive Cooling 16 GB HBM2

MPN: 900-2G503-0000-000
Out of Stock
Highlights
  • Standard Memory: 16 GB
  • Cooler Type: Passive Cooler
  • Product Type: Graphics Card
  • Condition: New
$7,818.12
Non-cancelable and non-returnable
B2B pricing options available.

SabrePC B2B Account Services

With a B2B account, save instantly and shop with confidence knowing that a dedicated account team is a phone call or email away to answer any of your questions.

  • Business-Only Pricing
  • Personalized Quotes
  • Fast Delivery
  • Products and Support

The Most Advanced Data Center GPU Ever Built.
NVIDIA® Tesla® V100 is the world's most advanced data center GPU ever built to accelerate AI, HPC, and graphics. Powered by NVIDIA Volta™, the latest GPU architecture, Tesla V100 offers the performance of up to 100 CPUs in a single GPU, enabling data scientists, researchers, and engineers to tackle challenges that were once thought impossible.

GROUNDBREAKING INNOVATIONS

VOLTA ARCHITECTURE
By pairing CUDA Cores and Tensor Cores within a unified architecture, a single server with Tesla V100 GPUs can replace hundreds of commodity CPU servers for traditional HPC and Deep Learning.

TENSOR CORE
Equipped with 640 Tensor Cores, Tesla V100 delivers 120 TeraFLOPS of deep learning performance. That's 12X the Tensor FLOPS for DL training and 6X the Tensor FLOPS for DL inference compared to NVIDIA Pascal™ GPUs.
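
For illustration only (not part of NVIDIA's listing copy): a minimal CUDA sketch of how Tensor Cores are typically driven through the WMMA API on Volta-class GPUs such as Tesla V100 (compute capability 7.0). The 16x16x16 tile shape, FP16 inputs, and FP32 accumulation are assumptions chosen for the example; compile with nvcc -arch=sm_70.

    #include <mma.h>
    #include <cuda_fp16.h>
    #include <cuda_runtime.h>
    using namespace nvcuda;

    // One warp multiplies a pair of 16x16 FP16 tiles on Tensor Cores,
    // accumulating the result in FP32.
    __global__ void wmma_16x16x16(const half *a, const half *b, float *c) {
        wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
        wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
        wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

        wmma::fill_fragment(c_frag, 0.0f);               // C = 0
        wmma::load_matrix_sync(a_frag, a, 16);           // load 16x16 tile of A
        wmma::load_matrix_sync(b_frag, b, 16);           // load 16x16 tile of B
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);  // C += A * B on Tensor Cores
        wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
    }

    int main() {
        half *a, *b; float *c;
        cudaMalloc(&a, 16 * 16 * sizeof(half));
        cudaMalloc(&b, 16 * 16 * sizeof(half));
        cudaMalloc(&c, 16 * 16 * sizeof(float));
        // Inputs are left uninitialized; this sketch only exercises the compute path.
        wmma_16x16x16<<<1, 32>>>(a, b, c);  // one full warp computes one output tile
        cudaDeviceSynchronize();
        cudaFree(a); cudaFree(b); cudaFree(c);
        return 0;
    }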

NEXT GENERATION NVLINK
NVIDIA NVLink in Tesla V100 delivers 2X higher throughput compared to the previous generation. Up to eight Tesla V100 accelerators can be interconnected at up to 300 GB/s to unleash the highest application performance possible on a single server.
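
For illustration only: a minimal CUDA runtime sketch of the peer-to-peer copy pattern that benefits from NVLink, moving a buffer directly from GPU 0 to GPU 1. The device IDs and 256 MiB buffer size are arbitrary assumptions; on SXM2 systems the copy travels over NVLink when the two GPUs are NVLink-connected and falls back to PCIe otherwise. On a multi-GPU server, nvidia-smi topo -m reports which GPU pairs are linked.

    #include <cuda_runtime.h>
    #include <cstdio>

    int main() {
        int canAccess = 0;
        cudaDeviceCanAccessPeer(&canAccess, 0, 1);
        if (!canAccess) { printf("GPU 0 cannot peer with GPU 1\n"); return 1; }

        const size_t bytes = 256 << 20;  // 256 MiB test buffer (arbitrary size)
        void *src = nullptr, *dst = nullptr;

        cudaSetDevice(0);
        cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 access GPU 1
        cudaMalloc(&src, bytes);

        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);  // let GPU 1 access GPU 0
        cudaMalloc(&dst, bytes);

        // Direct device-to-device copy: GPU 0 -> GPU 1.
        cudaMemcpyPeer(dst, 1, src, 0, bytes);
        cudaDeviceSynchronize();

        cudaFree(dst);
        cudaSetDevice(0);
        cudaFree(src);
        return 0;
    }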