Battle of the GPUs: NVIDIA vs. AMD
NVIDIA and AMD revealed their latest and greatest graphics cards at the end of 2020: NVIDIA with its GeForce RTX 30 Series, based on the new Ampere architecture, and AMD with its Radeon RX 6000 family, based on the RDNA 2 architecture (also found in the new Xbox and PlayStation consoles). So, which card should you buy?
Read on to find which GPUs will work best for you.
The RTX 3080 is the #1 Choice for Ray Tracing
Ray tracing can take a flat, unconvincing-looking game and make it pop with realistic lighting and true-to-life reflections.
Lighting, reflections and shadows all get a serious upgrade from ray tracing, due to the technology’s ability to accurately map how virtual light rays bounce around a 3D environment.
Unfortunately, the tech is also hugely demanding, slashing frame rates and possibly turning buttery smooth gameplay into a frustrating slideshow.
Anyone interested in how beautiful games can look with ray tracing should invest in one of NVIDIA’s GeForce 30 series cards (RTX 3060, RTX 3070, RTX 3080, RTX 3090), which pack 2nd generation Ray Tracing Cores that accelerate ray tracing calculations, greatly reducing the strain placed on the card.
While the Radeon RX 6000 series cards have their own built-in ray tracing accelerators, in benchmarks these perform significantly worse than NVIDIA's 30 series. Remedy's Control has one of the most demanding ray tracing implementations in any game, and tests by journalists at Digital Foundry found frame rates 48% higher on the GeForce RTX 3080 than on the flagship Radeon RX 6900 XT, a significant gap given that the AMD card is slated to cost substantially more than NVIDIA's 3080.
Image source: NVIDIA
Want great performance at an affordable price? Get a Radeon RX 6800
Gaming isn’t just about looks; it’s also about performance. The responsiveness that comes from getting a few extra frames per second can make the difference between a win and a loss in multiplayer gaming. And with the advent of 120Hz+ monitors, the need for powerful graphics cards that can pump out additional frames has never been greater.
For the more price-conscious gamer, this is where AMD Radeon RX 6000 series cards get a chance to shine.
Across the majority of games, the Radeon RX 6800 was found to deliver smoother gameplay than the slightly cheaper NVIDIA GeForce RTX 3070. Diving into the numbers, Digital Foundry found that the 6800 delivered 10 to 15 percent higher frame rates than the 3070 in most of the games tested at 4K resolution, and it also beat the NVIDIA card in every test at 1440p.
Image source: AMD
The Mid-Level GPUs are the Most Power Efficient
If you’re wondering which family of cards is more efficient and will save you the most on your electricity bills, then the answer is: it depends on the workload and the card.
The stated power consumption, the TDP, for equivalent cards in the Radeon RX 6000 series and NVIDIA GeForce RTX 30 series is broadly similar, topping out at 300W for the Radeon RX 6900XT and 350W for the higher performing NVIDIA GeForce RTX 3090.
In terms of which cards consume the least power for the work they’re doing, both the Radeon RX 6800 and the GeForce RTX 3070 seem like a decent choice.
In tests of the amount of energy it took each card to render a single frame of a game, the cards were broadly neck and neck, with energy consumption ranging from 2.6 to 3.8 joules per frame depending on the game.
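Since one watt is one joule per second, the per-frame energy cost can be estimated by dividing a card's average power draw by its frame rate. A minimal sketch in Python (the wattage and FPS figures below are illustrative, not measured values from any review):

```python
def joules_per_frame(avg_power_watts: float, fps: float) -> float:
    """Energy per rendered frame: 1 W = 1 J/s, so J/frame = W / (frames/s)."""
    return avg_power_watts / fps

# Illustrative only: a card drawing 220 W while rendering at 75 FPS
print(round(joules_per_frame(220, 75), 2))  # 2.93 J per frame
```

A lower number means the card is doing the same rendering work on less electricity, which is the metric the tests above compare.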
And if you’re upgrading from an earlier generation of GPU (the NVIDIA GeForce 20 series or Radeon RX 5000 series), tests have shown the newer cards doing up to 150% more work for the same energy cost.
NVIDIA GeForce RTX 30 Series Taps Machine Learning for Smooth, Crisp Gameplay
While the Radeon RX 6000 series offers robust performance, an ace up the NVIDIA 30 series’ sleeve comes from its Deep Learning Super Sampling (DLSS) feature.
DLSS helps keep gameplay smooth and responsive without having to sacrifice much image quality.
The feature taps into machine-learning accelerators called Tensor Cores on the NVIDIA cards to upscale game resolution. For example, DLSS lets a card render a game at a much less demanding 1080p while the image it outputs looks very similar to native 4K.
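The arithmetic behind that trick is simple: a 1080p frame contains a quarter of the pixels of a 4K frame, so the GPU shades far fewer pixels before DLSS reconstructs the full-resolution image. A quick check in Python:

```python
# Pixels per frame at each resolution
pixels_1080p = 1920 * 1080  # 2,073,600
pixels_4k = 3840 * 2160     # 8,294,400

# Rendering at 1080p shades one quarter of the pixels of native 4K
print(pixels_4k / pixels_1080p)  # 4.0
```

That 4x reduction in shaded pixels is where the performance headroom comes from.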
The upscaled images have been found to be very close to native resolution quality in testing and to produce a cleaner image than competing upscaling technologies.
The upshot is improved performance at high resolutions, more than doubling performance in certain games. In some games, using DLSS is the only way to get smooth gameplay at 4K, with an early build of the hugely demanding Cyberpunk 2077 only able to reach an average frame rate of 60 FPS with DLSS enabled on an NVIDIA GeForce RTX 3090.
The downside is that DLSS is only available on NVIDIA cards and has to be supported on a game-by-game basis, so it isn't some magic solution to be used on every title. Nevertheless, it’s an impressive technology, with AMD reportedly working on a similar feature called FidelityFX Super Resolution.
NVIDIA 30 Series Graphics Cards are Machine Learning Powerhouses
Graphics cards do far more than play games in 2020; they are also the engine that drives machine learning, in which a computer system learns how to perform a task rather than being explicitly programmed to do it.
When it comes to machine learning, NVIDIA cards are still the natural choice, due to the maturity of NVIDIA’s ML toolset, in particular the cards’ CUDA API for parallel computing.
But which of the 30 series cards should you buy, particularly if you already own one of the previous-generation 20 series cards?
Even though the 30 series cards may appear to have taken a step back in some regards, for example by limiting the onboard memory on the 3080 to 10GB, the specs don’t tell the full story.
While the 3080 has less memory than the GeForce RTX 2080 Ti, it uses GDDR6X memory, which has double the bandwidth of the GDDR6 VRAM found on the 2080 Ti. The 3080 also has fewer Tensor Cores (cores tailored to common machine-learning inference and training workloads) than the 2080 Ti. However, the 3080 sports third-generation cores that double the data throughput of those on the 2080 Ti.
The upshot is that the RTX 3080 is capable of outperforming even the considerably more expensive Titan RTX for certain ML workloads, for instance when implementing an FP32 version of a ResNet50 architecture neural network using version 1.15 of the TensorFlow machine-learning library. In this test, the 3080 was able to handle 462 images per second, compared to the Titan’s 373 images per second.
Those worried about being forced to train larger ML models in small batches due to the 3080’s lower memory can pick up an RTX 3090, which sports 24GB of VRAM, albeit at a premium of almost $1,000 over the 3080.
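To get a feel for why VRAM caps batch size, here is a rough back-of-the-envelope sketch in Python. The per-sample memory figure and the reserved overhead are hypothetical placeholders; real usage depends heavily on the model, framework, and precision, so treat this as illustration rather than a profiler:

```python
def max_batch_size(vram_gb: float, per_sample_mb: float, reserved_gb: float = 2.0) -> int:
    """Rough estimate: the batch must fit in whatever VRAM remains after
    framework/driver overhead. Purely illustrative, not a real measurement."""
    usable_mb = (vram_gb - reserved_gb) * 1024
    return int(usable_mb // per_sample_mb)

# Hypothetical ~300 MB of activations/gradients per training sample:
print(max_batch_size(10, 300))  # RTX 3080 (10 GB)
print(max_batch_size(24, 300))  # RTX 3090 (24 GB)
```

Under these made-up assumptions, the 3090's extra VRAM roughly triples the feasible batch size, which is the practical benefit the paragraph above describes.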
If you're interested in an AI workstation sporting an RTX 30 Series GPU, you can find them starting at a very reasonable $3,700 here.
So, to wrap up: picking the right GPU boils down to what you're going to use it for, and, if that's gaming, which aspects of the experience matter most to you.
If you still have questions that we can answer for your particular situation, feel free to contact us and tell us what you're trying to do.