When AMD announced the Radeon RX 6600 XT a few months ago, it was positioned as the ideal 1080p gaming card, with the potential to deliver decent 1440p performance in certain games. Now there's the lower-tier RX 6600, and the story is pretty much the same — except, you know, worse. The existence of an "XT" card always implied a more mainstream version would follow eventually. But after testing the RX 6600 for the past week, I'm still wondering who this card is for.
Of course, that's a tough question to answer when the GPU market is so volatile and card prices vary wildly. AMD says the RX 6600's suggested retail price is $329, compared to $379 for the 6600 XT. But given the global chip shortage and resellers hungry for more GPUs, those prices are purely conceptual. In the real world, the 6600 XT now sells for upwards of $600 (and in some cases close to $800!). The 6600 is also competing against the RTX 3060, which has the same $329 MSRP but is now selling for between $800 and $1,020. So much for budget GPUs.
AMD, a company with a reputation for creating budget-friendly cards that packed a decent punch, probably wanted to stay true to its roots. But unless it can guarantee a price close to MSRP, the RX 6600 just seems out of place in today's gaming landscape. As you'll see in our testing, it's a capable 1080p gaming card. But its ray tracing performance is terrible, and it can't take advantage of NVIDIA's DLSS technology, which uses AI to boost performance.
I'll be honest, I didn't expect much from the RX 6600 from the start. Under the hood, its RDNA 2 architecture is powered by 28 compute units and 1,792 stream processors, a noticeable step down from the 6600 XT's 32 CUs and 2,048 stream processors. There's also a serious speed difference: the cheaper card has a 2,044MHz game clock and a 2,491MHz boost clock, compared to the 6600 XT's 2,359MHz game clock and 2,589MHz boost. Both cards have 8GB of GDDR6 RAM, but the 6600's memory bandwidth is 34 GB/s slower at 224 GB/s.
| GPU | 3DMark Time Spy | Destiny 2 | Hitman 3 | Port Royal (ray tracing) |
| --- | --- | --- | --- | --- |
| AMD Radeon RX 6600 | 8,521 | 1080p: 110-120fps / 1440p: 75-85fps | 1080p: 138fps / 1440p: 94fps | 3,846 / 17fps |
| AMD Radeon RX 6600 XT | 9,872 | 1080p: 130-150fps / 1440p: 85-105fps | 1080p: 146fps / 1440p: 110fps | 4,582 / 21.22fps |
| AMD Radeon RX 6700 XT | 11,198 | 1440p: 75-100fps / 4K: 50-75fps | N/A | 5,920 / 27.4fps |
| NVIDIA RTX 3060 Ti | 11,308 | 1440p: 85-110fps / 4K: 45-60fps | N/A | 6,989 / 32.36fps |
Given those specs, I predicted the RX 6600 would be a decent 1080p card and not much else. And for the most part, that's what my testing proved: it reached a solid 120fps in Destiny 2 while playing in 1080p with maxed-out graphics. Once I pushed the game to 1440p, though, it fell to 80fps. That pattern held true for pretty much everything I tested. Hitman 3's benchmark reached a respectable 138fps in 1080p with graphics settings cranked to the maximum, but only 94fps in 1440p.
If you've got an AMD Ryzen 5000 CPU (or some 3000 models), the RX 6600 gets a slight boost thanks to Smart Access Memory. That feature basically lets your CPU directly access all of your video card's memory, and it's something you can't use at all if you've got an Intel CPU. In case you were wondering, I had SAM enabled on my testing rig, which was powered by a Ryzen 7 5800X and 32GB of RAM.
Both the Radeon RX 6600 and 6600 XT had a hard time competing against their NVIDIA counterparts in our benchmarks. The RTX 3060 Ti reached 11,308 in 3DMark Time Spy, whereas the 6600 XT hit 9,872 and the 6600 trailed behind with a score of 8,521. I didn't have an RTX 3060 on hand to test, but 3DMark's verified benchmarks on similar systems show scores of around 10,000.
Ray tracing was also a lost cause with the 6600 — just flipping on ray-traced reflections in Control slowed the game to a meager 35fps. Without ray tracing, it was at least playable in 1080p, hitting between 60 and 70fps. Now, Control is a notoriously tough game on GPUs, but at least NVIDIA cards let me get decent framerates with ray tracing thanks to DLSS, which uses AI processing to upscale the game from a lower resolution. AMD's alternative, FidelityFX Super Resolution, isn't supported in Control yet. That solution is also cross-compatible with NVIDIA cards, so you won't need a Radeon GPU to take advantage of it — which again makes me wonder: why would you get the RX 6600 instead of the 3060?
I could see it being a worthwhile card for a very specific gamer: someone who has a small case and doesn't want to upgrade beyond a 450-watt power supply. The RX 6600 has a total board power of 132 watts, compared to 170 watts on the RTX 3060. That's a major reason why it runs so cool, reaching only around 70 degrees Celsius under load (the 3060 typically runs between 70 and 75 degrees Celsius when stressed). Still, I can't imagine that someone willing to shell out $329 for a GPU (and realistically much more) would limit themselves based on a weak power supply.
A good PC upgrade is one that'll last you for years, and, unfortunately, I can't imagine that'll be true of the RX 6600. Solid 1080p performance is nice to have today, but 1440p monitors are getting cheaper and games are becoming more demanding. Who knows if the 6600 will be able to handle a flagship title in a few years, even at 1080p. And if you move to a 1440p, ultrawide, or 4K screen down the line, you'll need a new GPU as well.
The Radeon RX 6600 could be a decent contender if the GPU market stabilizes and AMD pushes the price below $300. But for now, it’s a misfire that only makes sense if you can’t get your hands on an RTX 3060.