Where do these RTX GPUs stand?
The NVIDIA RTX A6000 is one of the top-of-the-line graphics cards for workstation applications. You may have heard of its bigger brother, the NVIDIA A100, but that card is geared more toward artificial intelligence and deep learning. The A100 does not include a display output, since it is more of an accelerator card than a “graphics card” in the traditional sense.
On the other end, we have the GeForce RTX 3090, a card beloved by gaming and workstation enthusiasts for its immense computing power. The 3090 is also used by individual data scientists for smaller-scale AI workloads and applications. In this article, we will compare the two: the NVIDIA RTX A6000 vs. the NVIDIA GeForce RTX 3090.
Spec          RTX A6000               GeForce RTX 3090
VRAM          48GB ECC GDDR6          24GB GDDR6X
TDP           300W                    350W
Slot Width    Dual Slot               3-Slot (Triple Slot)
As you can see, the RTX A6000 edges out the RTX 3090 on nearly every spec line. We will go over the bigger differences between these cards and why they matter: memory, TDP, and slot size.
VRAM and ECC Memory
The RTX A6000 has twice the VRAM: 48GB of ECC GDDR6 versus the 3090's 24GB of GDDR6X. This means the VRAM inside the 3090 is faster, but the A6000 simply has more of it. A likely reason the A6000 does not incorporate GDDR6X is its implementation of ECC.
ECC, or Error-Correcting Code, memory identifies and fixes the most common bit errors that could otherwise lead to data corruption or system crashes. ECC is almost never found in consumer-grade components such as system memory and GPUs, because occasional corruption there is not as detrimental as it is in a professional workstation environment.
In this case, ECC helps ensure that the data in your files does not become corrupted during display and rendering, adding an additional layer of safety.
Power Consumption (TDP)
A big talking point when the RTX 3090 came out was its wattage. The GPU is power-hungry, with a TDP of 350W. The A6000 comes in lower at a TDP of 300W. In data center and workstation deployments where many machines run within an office, every watt counts toward the business's electricity bill. Saving roughly 14% of potential wattage across multiple machines adds up for enterprise and business applications.
50W might seem minuscule in a one- or two-workstation environment, but when there are hundreds of GPUs inside servers, workstations, and HPC clusters, any energy saving goes a long way for large enterprise infrastructures.
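The savings described above are easy to estimate. Here is a back-of-envelope sketch; the 100-GPU fleet size, round-the-clock duty cycle, and $0.12/kWh electricity rate are illustrative assumptions, not figures from this article.

```python
# Back-of-envelope comparison of a 350W RTX 3090 vs. a 300W RTX A6000.
TDP_3090_W = 350
TDP_A6000_W = 300
HOURS_PER_YEAR = 24 * 365       # assume the GPUs run around the clock
RATE_USD_PER_KWH = 0.12         # hypothetical utility rate
NUM_GPUS = 100                  # hypothetical fleet size

watts_saved_per_gpu = TDP_3090_W - TDP_A6000_W                  # 50 W
kwh_saved_per_gpu = watts_saved_per_gpu * HOURS_PER_YEAR / 1000
fleet_savings_usd = kwh_saved_per_gpu * RATE_USD_PER_KWH * NUM_GPUS

print(f"Energy saved per GPU per year: {kwh_saved_per_gpu:.0f} kWh")
print(f"Estimated yearly savings for {NUM_GPUS} GPUs: ${fleet_savings_usd:,.0f}")
```

Under these assumptions, each A6000 saves 438 kWh per year, which is meaningful only at fleet scale, exactly as described above.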
Slot Size and Scalability
A big advantage the RTX A6000 has over the RTX 3090 is slot size. The dual-slot width of the RTX A6000 means that running multiple NVLinked RTX A6000s in your workstation is entirely feasible, as opposed to the triple-slot monster that is the 3090. The scalability of the RTX A6000 is therefore a major contributing factor.
For professional use cases where a single GPU just won't cut it, demanding workloads such as full movie rendering or deep learning require more than one GPU working together. NVLink connects multiple GPUs so they can pool their VRAM, effectively acting as one larger GPU.
In traditional towers, there is often space for only 8 slots: enough for a total of 4 RTX A6000 GPUs, but only 2 GeForce RTX 3090s. These professional RTX A6000 GPUs are also used in dense compute servers, where up to 10 cards contribute to a large GPU cluster. The card's thinness is an advantage for scalability.
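The slot arithmetic above can be sketched in a few lines, using the 8-slot tower mentioned and the VRAM figures from the spec comparison:

```python
# How many cards fit in a typical 8-slot tower, and how much VRAM an
# NVLinked pair can pool. Figures come from this comparison's specs.
TOTAL_SLOTS = 8
A6000_SLOT_WIDTH = 2    # dual-slot card
RTX3090_SLOT_WIDTH = 3  # triple-slot card

max_a6000 = TOTAL_SLOTS // A6000_SLOT_WIDTH    # 4 cards
max_3090 = TOTAL_SLOTS // RTX3090_SLOT_WIDTH   # 2 cards

# Pooled VRAM for one NVLinked pair of each card
pooled_a6000_gb = 2 * 48   # 96 GB
pooled_3090_gb = 2 * 24    # 48 GB

print(f"8-slot tower fits {max_a6000}x A6000 vs {max_3090}x RTX 3090")
print(f"NVLinked pair pools {pooled_a6000_gb}GB vs {pooled_3090_gb}GB")
```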
Benchmarks
Plenty of benchmarks show the RTX 3090 and RTX A6000 posting similar times in Blender tests, rendering benchmarks, and gaming FPS benchmarks. Some might even say: just get an RTX 3090, since they are cheaper!
However, that is far from the whole truth. If you monitor GPU usage on Linux with the nvidia-smi utility, you can see the wattage, memory usage, and GPU utilization. When rendering smaller scenes, the speed of the Tensor Cores and CUDA cores is what determines performance.
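A quick way to watch those three metrics is nvidia-smi's query mode. Since this sketch has to run without a GPU attached, it parses a hardcoded sample line; the reading itself is hypothetical, not measured data. On a real system you would capture the live output of the command shown in the comment.

```python
# On a machine with an NVIDIA driver installed, the raw line below would
# come from:
#   nvidia-smi --query-gpu=power.draw,memory.used,utilization.gpu \
#              --format=csv,noheader,nounits
sample_output = "287.50, 9216, 98"   # hypothetical reading: W, MiB, %

power_w, mem_used_mib, util_pct = (float(v) for v in sample_output.split(","))

print(f"Power draw:      {power_w:.1f} W")
print(f"VRAM in use:     {mem_used_mib / 1024:.1f} GiB")
print(f"GPU utilization: {util_pct:.0f} %")
```

Watching memory.used while a render runs is exactly how you can tell whether a scene is small enough to fit in VRAM.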
In smaller scenes, GPU memory usage only creeps up to 3GB, maybe 10GB; the two GPUs will no doubt perform similarly, since they are, in fact, built on the same GPU die (GA102). In some cases, the RTX 3090 even beats the RTX A6000 thanks to its GDDR6X memory. But when scenes get larger and larger, to the point where you are rendering a full-on movie scene, that is when the RTX A6000 truly shines.
Benchmark tests on the Disney Moana scene show the RTX A6000 crushing the render, finishing in 26:39, whereas the RTX 3090 took 58:43. What happened? How did the RTX A6000 render more than twice as fast?
The amount of GPU RAM played a huge role in rendering this complex scene. With its 48GB of GDDR6, the A6000 can cache large amounts of pre-rendered data directly in GPU memory, whereas the RTX 3090 (24GB of GDDR6X) became bottlenecked and had to fall back on slower system memory.
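That bottleneck boils down to a simple fit check. In the sketch below, the 30GB working-set figure is an illustrative assumption for a scene that exceeds 24GB, not a measured number from the Moana benchmark.

```python
# Does a scene's working set fit entirely in local VRAM? If not, the
# renderer spills to system memory over PCIe, which is far slower.
VRAM_3090_GB = 24
VRAM_A6000_GB = 48
scene_working_set_gb = 30   # hypothetical scene larger than 24 GB

def fits_in_vram(scene_gb: float, vram_gb: float) -> bool:
    """True if the scene can be cached entirely in GPU memory."""
    return scene_gb <= vram_gb

on_3090 = fits_in_vram(scene_working_set_gb, VRAM_3090_GB)    # spills
on_a6000 = fits_in_vram(scene_working_set_gb, VRAM_A6000_GB)  # stays local

print(f"Fits on RTX 3090:  {on_3090}")
print(f"Fits on RTX A6000: {on_a6000}")
```

This is the whole story of the Moana result: the same GA102 die, but one card keeps the data local while the other does not.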
What Should You Get?
Unsurprisingly, the professional-grade GPU edged out ahead of its consumer-grade counterpart, the RTX 3090.
By keeping memory close and local, the RTX A6000 performs far more efficiently on large workloads. That is not to say you should always spring for the RTX A6000, however. If high-throughput workloads and heavy rendering are far from what you do on your workstation, the GeForce RTX 3090 is a great option, especially for a personal system.
However, the extra VRAM in the RTX A6000 comes in handy for more complex tasks such as AI inference, Blender rendering, and more. Your workloads define which card you should slot into your workstation.
Your use case should determine what matters in your workstation. If you value the added density and scalability the RTX A6000 offers for extreme, high-throughput workloads, then the extra price is warranted. However, if your workloads won't exceed those of the average computer user and you're just trying to game at 4K, then a single or dual RTX 3090 is your best bet.
If you are unsure about what you need, feel free to contact SabrePC today and we would be more than happy to help you find the component or system you need!