Nvidia is reportedly gearing up to launch the first few cards in its RTX 50-series at CES next week, including an RTX 5090, RTX 5080, RTX 5070 Ti, and RTX 5070. The 5090 will be of particular interest to performance-obsessed, money-is-no-object PC gaming fanatics since it’s the first new GPU in over two years that can beat the performance of 2022’s RTX 4090.
But boosted performance and slower advancements in chip manufacturing technology mean that the 5090’s maximum power draw will far outstrip the 4090’s, according to leakers. VideoCardz reports that the 5090’s thermal design power (TDP) will be set at 575 W, up from 450 W for the already power-hungry RTX 4090. The RTX 5080’s TDP is also increasing to 360 W, up from 320 W for the RTX 4080 Super.
That also puts the RTX 5090 close to the maximum power draw available over a single 12VHPWR connector, which is capable of delivering up to 600 W of power (though once you include the 75 W available via the PCI Express slot on your motherboard, the actual maximum possible power draw for a GPU with a single 12VHPWR connector is a slightly higher 675 W).
Higher peak power consumption doesn’t necessarily mean that these cards will always draw more power during actual gaming than their 40-series counterparts. And their performance could be good enough that they still end up being very efficient cards in terms of performance per watt.
But if you’re considering an upgrade to an RTX 5090 and these power specs are accurate, you may need to consider an upgraded power supply along with your new graphics card. Nvidia recommends at least an 850 W power supply for the RTX 4090 to accommodate what the GPU needs while leaving enough power left over for the rest of the system. An additional 125 W bump suggests that Nvidia will recommend a 1,000 W power supply as the minimum for the 5090.
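As a rough sketch of that math (the 300 W estimate for the rest of the system and the 10 percent headroom factor below are our own assumptions, chosen so the formula lines up with Nvidia’s published 850 W recommendation for the 4090, not anything Nvidia has said about the 5090):

```python
# Back-of-the-envelope PSU sizing. The ~300 W estimate for the rest of the
# system and the 10 percent headroom factor are our own assumptions, chosen
# so the formula reproduces Nvidia's published 850 W recommendation for the
# RTX 4090; they are not official Nvidia guidance.
def recommended_psu_watts(gpu_tdp_w, rest_of_system_w=300, headroom=1.10):
    load = (gpu_tdp_w + rest_of_system_w) * headroom
    return int(-(-load // 50) * 50)  # round up to the next 50 W PSU size

print(recommended_psu_watts(450))  # RTX 4090 (450 W TDP) -> 850
print(recommended_psu_watts(575))  # RTX 5090 (leaked 575 W TDP) -> 1000
```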
We’ll probably know more about Nvidia’s next-gen cards after its CES keynote, currently scheduled for 9:30 pm Eastern/6:30 pm Pacific on Monday, January 6.
Back in 2013, Nvidia introduced a new technology called G-Sync to eliminate screen tearing and stuttering effects and reduce input lag when playing PC games. The company accomplished this by tying your display’s refresh rate to the actual frame rate of the game you were playing, and similar variable refresh-rate (VRR) technology has become a mainstay even in budget monitors and TVs today.
The issue for Nvidia is that G-Sync isn’t what has been driving most of that adoption. G-Sync has always required extra dedicated hardware inside of displays, increasing the costs for both users and monitor manufacturers. The VRR technology in most low-end to mid-range screens these days is usually some version of the royalty-free AMD FreeSync or the similar VESA Adaptive-Sync standard, both of which provide G-Sync’s most important features without requiring extra hardware. Nvidia more or less acknowledged that the free-to-use, cheap-to-implement VRR technologies had won in 2019 when it announced its “G-Sync Compatible” certification tier for FreeSync monitors. The list of G-Sync Compatible screens now vastly outnumbers the list of G-Sync and G-Sync Ultimate screens.
Today, Nvidia is announcing a change that’s meant to keep G-Sync alive as its own separate technology while eliminating the requirement for expensive additional hardware. Nvidia says it’s partnering with chipmaker MediaTek to build G-Sync capabilities directly into scaler chips that MediaTek is creating for upcoming monitors. G-Sync modules ordinarily replace these scaler chips, but they’re entirely separate boards with expensive FPGA chips and dedicated RAM.
These new MediaTek scalers will support all the same features that current dedicated G-Sync modules do. Nvidia says that three G-Sync monitors with MediaTek scaler chips inside will launch “later this year”: the Asus ROG Swift PG27AQNR, the Acer Predator XB273U F5, and the AOC AGON PRO AG276QSG2. These are all 27-inch 1440p displays with maximum refresh rates of 360 Hz.
As of this writing, none of these companies has announced pricing for these displays. The current Asus PG27AQN, which has a traditional G-Sync module and a 360 Hz refresh rate, currently goes for around $800, so we’d hope the new version comes in significantly cheaper. That would make good on Nvidia’s claim that the MediaTek chips will reduce costs, though it also remains to be seen whether monitor makers will pass any savings on to consumers.
For most people most of the time, there won’t be an appreciable difference between a “true” G-Sync monitor and one that uses FreeSync or Adaptive-Sync, but there are still a few fringe benefits. G-Sync monitors support variable refresh rates from 1 Hz all the way up to the panel’s maximum, whereas FreeSync and Adaptive-Sync stop working on most displays when the frame rate drops below 40 or 48 frames per second. All G-Sync monitors also support “variable overdrive” technology to help eliminate display ghosting, and the new MediaTek-powered displays will support the recent “G-Sync Pulsar” feature to reduce blur.
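To illustrate what happens at those low frame rates, here’s a simplified sketch of the frame-doubling approach (often called low framerate compensation) that a display can use to keep variable refresh behavior intact below its minimum refresh rate. It’s an illustrative model of the general technique, not Nvidia’s or AMD’s actual implementation:

```python
# Simplified model of low framerate compensation (LFC): when the game's frame
# rate falls below the panel's minimum variable-refresh rate, each frame is
# shown more than once so the effective refresh rate stays inside the VRR
# window. This is an illustrative sketch, not any vendor's actual algorithm.
def effective_refresh(frame_rate, vrr_min=48, vrr_max=360):
    if frame_rate >= vrr_min:
        return min(frame_rate, vrr_max), 1   # refresh tracks the game directly
    repeats = -(-vrr_min // frame_rate)      # smallest repeat count that clears vrr_min
    return frame_rate * repeats, repeats

print(effective_refresh(90))  # (90, 1): inside the VRR range, no repetition needed
print(effective_refresh(30))  # (60, 2): each frame shown twice to stay above 48 Hz
```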
In July 2023, AMD released a new GPU called the “Radeon RX 7900 GRE” in China. GRE stands for “Golden Rabbit Edition,” a reference to the Chinese zodiac, and while the card was available outside of China in a handful of pre-built OEM systems, AMD didn’t make it widely available at retail.
That changes today—AMD is launching the RX 7900 GRE at US retail for a suggested starting price of $549. This throws it right into the middle of the busy upper-mid-range graphics card market, where it will compete with Nvidia’s $549 RTX 4070 and the $599 RTX 4070 Super, as well as AMD’s own $500 Radeon RX 7800 XT.
We’ve run our typical set of GPU tests on the 7900 GRE to see how it stacks up to the cards AMD and Nvidia are already offering. Is it worth buying a new card relatively late in this GPU generation, when rumors point to new next-gen GPUs from Nvidia, AMD, and Intel before the end of the year? Can the “Golden Rabbit Edition” still offer a good value, even though it’s currently the year of the dragon?
Meet the 7900 GRE
| | RX 7900 XT | RX 7900 GRE | RX 7800 XT | RX 6800 XT | RX 6800 | RX 7700 XT | RX 6700 XT | RX 6750 XT |
|---|---|---|---|---|---|---|---|---|
| Compute units (Stream processors) | 84 (5,376) | 80 (5,120) | 60 (3,840) | 72 (4,608) | 60 (3,840) | 54 (3,456) | 40 (2,560) | 40 (2,560) |
| Boost clock | 2,400 MHz | 2,245 MHz | 2,430 MHz | 2,250 MHz | 2,105 MHz | 2,544 MHz | 2,581 MHz | 2,600 MHz |
| Memory bus width | 320-bit | 256-bit | 256-bit | 256-bit | 256-bit | 192-bit | 192-bit | 192-bit |
| Memory clock | 2,500 MHz | 2,250 MHz | 2,438 MHz | 2,000 MHz | 2,000 MHz | 2,250 MHz | 2,000 MHz | 2,250 MHz |
| Memory size | 20GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 16GB GDDR6 | 12GB GDDR6 | 12GB GDDR6 | 12GB GDDR6 |
| Total board power (TBP) | 315 W | 260 W | 263 W | 300 W | 250 W | 245 W | 230 W | 250 W |
The 7900 GRE slots into AMD’s existing lineup above the RX 7800 XT (currently $500-ish) and below the RX 7900 XT (around $750). Technologically, we’re looking at the same Navi 31 GPU silicon as the 7900 XT and XTX, but with just 80 of the compute units enabled, down from 84 and 96, respectively. The normal benefits of the RDNA3 graphics architecture apply, including hardware-accelerated AV1 video encoding and DisplayPort 2.1 support.
The 7900 GRE also includes four active memory controller die (MCD) chiplets, giving it a narrower 256-bit memory bus and 16GB of memory instead of 20GB—still plenty for modern games, though possibly not quite as future-proof as the 7900 XT. The card uses significantly less power than the 7900 XT and about the same amount as the 7800 XT. That feels a bit weird, intuitively, since slower cards almost always consume less power than faster ones. But it does make some sense; pushing the 7800 XT’s smaller Navi 32 GPU to get higher clock speeds out of it is probably making it run a bit less efficiently than a larger Navi 31 GPU die that isn’t being pushed as hard.
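To put that narrower bus in perspective, memory bandwidth scales with both bus width and memory speed. The rough calculation below assumes the usual GDDR6 convention that the effective per-pin data rate is eight times the listed memory clock:

```python
# Rough GDDR6 bandwidth math: bytes per transfer (bus width / 8) times the
# effective data rate per pin. We assume the common convention that the
# effective rate in Gbps is 8x the listed memory clock in GHz.
def gddr6_bandwidth_gb_s(bus_width_bits, memory_clock_mhz):
    effective_gbps_per_pin = memory_clock_mhz / 1000 * 8
    return bus_width_bits / 8 * effective_gbps_per_pin

print(round(gddr6_bandwidth_gb_s(320, 2500)))  # RX 7900 XT:  ~800 GB/s
print(round(gddr6_bandwidth_gb_s(256, 2250)))  # RX 7900 GRE: ~576 GB/s
print(round(gddr6_bandwidth_gb_s(256, 2438)))  # RX 7800 XT:  ~624 GB/s
```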
When we reviewed the 7800 XT last year, we noted that its hardware configuration and performance made it seem more like a successor to the (non-XT) Radeon RX 6800, while it just barely managed to match or beat the 6800 XT in our tests. Same deal with the 7900 GRE, which is a more logical successor to the 6800 XT. Bear that in mind when doing generation-over-generation comparisons.
If there’s been one consistent criticism of Nvidia’s RTX 40-series graphics cards, it’s been pricing. All of Nvidia’s product tiers have seen their prices creep up over the last few years, but cards like the 4090 raised prices to new heights, while lower-end models like the 4060 and 4060 Ti kept pricing the same but didn’t improve performance much.
Today, Nvidia is sprucing up its 4070 and 4080 tiers with a mid-generation “Super” refresh that at least partially addresses some of these pricing problems. Like older Super GPUs, the 4070 Super, 4070 Ti Super, and 4080 Super use the same architecture and support all the same features as their non-Super versions, but with bumped specs and tweaked prices that might make them more appealing to people who skipped the originals.
The 4070 Super will launch first, on January 17, for $599. The $799 RTX 4070 Ti Super launches on January 24, and the $999 4080 Super follows on January 31.
| | RTX 4090 | RTX 4080 | RTX 4080 Super | RTX 4070 Ti | RTX 4070 Ti Super | RTX 4070 | RTX 4070 Super |
|---|---|---|---|---|---|---|---|
| CUDA cores | 16,384 | 9,728 | 10,240 | 7,680 | 8,448 | 5,888 | 7,168 |
| Boost clock | 2,520 MHz | 2,505 MHz | 2,550 MHz | 2,610 MHz | 2,610 MHz | 2,475 MHz | 2,475 MHz |
| Memory bus width | 384-bit | 256-bit | 256-bit | 192-bit | 256-bit | 192-bit | 192-bit |
| Memory clock | 1,313 MHz | 1,400 MHz | 1,437 MHz | 1,313 MHz | 1,313 MHz | 1,313 MHz | 1,313 MHz |
| Memory size | 24GB GDDR6X | 16GB GDDR6X | 16GB GDDR6X | 12GB GDDR6X | 16GB GDDR6X | 12GB GDDR6X | 12GB GDDR6X |
| TGP | 450 W | 320 W | 320 W | 285 W | 285 W | 200 W | 220 W |
Of the three cards, the 4080 Super probably brings the least significant spec bump, with a handful of extra CUDA cores and small clock speed increases but the same amount of memory and the same 256-bit memory interface. Its main innovation is its price, which at $999 is $200 lower than the original 4080’s $1,199 launch price. This doesn’t make it a bargain—we’re still talking about a $1,000 graphics card—but the 4080 Super feels like a more proportionate step down from the 4090 and a good competitor to AMD’s flagship Radeon RX 7900 XTX.
The 4070 Ti Super stays at the same $799 price as the 4070 Ti (which, if you’ll recall, was nearly launched at $899 as the “RTX 4080 12GB”) but addresses two major gripes with the original by stepping up to a 256-bit memory interface and 16GB of RAM. It also picks up some extra CUDA cores, while staying within the same power envelope as the original 4070 Ti. These changes should help it keep up with modern 4K games, where the smaller pool of memory and narrower memory interface of the original 4070 Ti could sometimes be a drag on performance.
Finally, we get to the RTX 4070 Super, which also keeps the 4070’s $599 price tag but sees a substantial uptick in processing hardware, from 5,888 CUDA cores to 7,168 (the power envelope also increases, from 200 W to 220 W). The memory system remains unchanged. The original 4070 was already a decent baseline for entry-level 4K gaming and very good 1440p gaming, and the 4070 Super should make 60 FPS 4K attainable in even more games.
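For a quick sense of scale, here’s the CUDA core bump for each Super card relative to its non-Super counterpart, calculated from the spec table above:

```python
# CUDA core increase for each Super card vs. its non-Super counterpart,
# using the counts from the spec table above.
specs = {
    "RTX 4080 Super vs. RTX 4080":       (10_240, 9_728),
    "RTX 4070 Ti Super vs. RTX 4070 Ti": (8_448, 7_680),
    "RTX 4070 Super vs. RTX 4070":       (7_168, 5_888),
}
for name, (super_cores, base_cores) in specs.items():
    print(f"{name}: +{(super_cores / base_cores - 1) * 100:.0f}% CUDA cores")
# RTX 4080 Super vs. RTX 4080:       +5% CUDA cores
# RTX 4070 Ti Super vs. RTX 4070 Ti: +10% CUDA cores
# RTX 4070 Super vs. RTX 4070:       +22% CUDA cores
```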
Nvidia says that the original 4070 Ti and 4080 will be phased out. The original 4070 will stick around at a new $549 price, $50 less than before, but not particularly appealing compared to the $599 4070 Super. The 4090, 4060, and the 8GB and 16GB versions of the 4060 Ti all remain available for the same prices as before.
Nvidia’s performance comparisons focus mostly on older-generation cards rather than the non-Super versions, and per usual for 40-series GPU announcements, they lean heavily on performance numbers that are inflated by DLSS 3 frame generation. In terms of pure rendering performance, Nvidia says the 4070 Super should outperform an RTX 3090—impressive, given that the original 4070 was closer to an RTX 3080. The RTX 4080 Super is said to be roughly twice as fast as an RTX 3080, and Nvidia says the RTX 4070 Ti Super will be roughly 2.5 times faster than a 3070 Ti.
Though all three of these cards provide substantially more value than their non-Super predecessors at the same prices, the fact remains that prices have still gone up compared to past generations. Nvidia last released a Super refresh during the RTX 20-series back in 2019; the RTX 2080 Super went for $699 and the 2070 Super for $499. But the 4080 Super, 4070 Ti Super, and 4070 Super will give you more for your money than you could get before, which is at least a move in the right direction.
In many ways, 2023 was a long-awaited return to normalcy for people who build their own gaming and/or workstation PCs. For the entire year, most mainstream components were available at or a little under their official retail prices, making it possible to build all kinds of PCs at relatively reasonable prices without worrying about restocks or waiting for discounts. It was a welcome continuation of GPU trends that started in 2022: Nvidia, AMD, or Intel could release a new GPU, and you could consistently buy that GPU for roughly what it was supposed to cost.
That’s where we get into how frustrating 2023 was for GPU buyers, though. Cards like the GeForce RTX 4090 and Radeon RX 7900 series launched in late 2022 and boosted performance beyond what any last-generation cards could achieve. But 2023’s midrange GPU launches were less ambitious. Not only did they offer the performance of a last-generation GPU, but most of them did it for around the same price as the last-gen GPUs whose performance they matched.
The midrange runs in place
Not every midrange GPU launch will get us a GTX 1060, a card that was roughly 50 percent faster than its immediate predecessor and that beat the previous-generation GTX 980 despite costing just over half as much. But even if your expectations were low, this year’s midrange GPU launches were underwhelming.
The worst was probably the GeForce RTX 4060 Ti, which sometimes struggled to beat the card it replaced at around the same price. The 16GB version of the card was particularly maligned since it was $100 more expensive but was only faster than the 8GB version in a handful of games.
The regular RTX 4060 was slightly better news, thanks partly to a $30 price drop from where the RTX 3060 started. The performance gains were small, and a drop from 12GB to 8GB of RAM isn’t the direction we prefer to see things move, but it was still a slightly faster and more efficient card at around the same price. AMD’s Radeon RX 7600, RX 7700 XT, and RX 7800 XT all belong in this same broad category—some improvements, but generally similar performance to previous-generation parts at similar or slightly lower prices. Not an exciting leap for people with aging GPUs who waited out the GPU shortage to get an upgrade.
The best midrange card of the generation—and at $600, we’re definitely stretching the definition of “midrange”—might be the GeForce RTX 4070, which can generally match or slightly beat the RTX 3080 while using much less power and costing $100 less than the RTX 3080’s suggested retail price. That seems like a solid deal once you consider that the RTX 3080 was essentially unavailable at its suggested retail price for most of its life span. But $600 is still a $100 increase from the 2070 and a $220 increase from the 1070, making it tougher to swallow.
In all, 2023 wasn’t the worst time to buy a $300 GPU; that dubious honor belongs to the depths of 2021, when you’d be lucky to snag a GTX 1650 for that price. But “consistently available, basically competent GPUs” are harder to be thankful for the further we get from the GPU shortage.
Marketing gets more misleading
If you just looked at Nvidia’s early performance claims for each of these GPUs, you might think that the RTX 40-series was an exciting jump forward.
But these numbers were only possible in games that supported these GPUs’ newest software gimmick, DLSS Frame Generation (FG). The original DLSS and DLSS 2 improve performance by upsampling the images your GPU renders, filling in interpolated pixels to turn lower-resolution images into higher-resolution ones without the blurriness and loss of image quality you’d get from simple upscaling. DLSS FG generates entire frames in between the ones being rendered by your GPU, theoretically providing big frame rate boosts without requiring a powerful GPU.
The technology is impressive when it works, and it’s been successful enough to spawn hardware-agnostic imitators like the AMD-backed FSR 3 and an alternate implementation from Intel that’s still in early stages. But it has notable limitations—mainly, it needs a reasonably high base frame rate to have enough data to generate convincing extra frames, something that these midrange cards may struggle to do. Even when performance is good, it can introduce weird visual artifacts or lose fine detail. The technology isn’t available in all games. And DLSS FG also adds a bit of latency, though this can be offset with latency-reducing technologies like Nvidia Reflex.
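To see why those marketing numbers look so flattering, consider a simplified model: inserting one generated frame between each pair of rendered frames roughly doubles the displayed frame rate, but responsiveness still tracks the rendered rate. The numbers below are illustrative assumptions, not benchmark results:

```python
# Simplified model of DLSS Frame Generation: one generated frame is inserted
# between each pair of rendered frames, so the displayed rate roughly doubles
# (minus an assumed ~10 percent overhead for running the feature), while input
# is still sampled once per rendered frame. Illustrative numbers only.
def with_frame_generation(rendered_fps, overhead=0.9):
    displayed_fps = rendered_fps * 2 * overhead
    rendered_frame_time_ms = 1000 / rendered_fps
    return displayed_fps, rendered_frame_time_ms

for fps in (30, 60):
    displayed, frame_time = with_frame_generation(fps)
    print(f"{fps} fps rendered -> ~{displayed:.0f} fps displayed, "
          f"~{frame_time:.0f} ms between rendered frames")
# 30 fps rendered -> ~54 fps displayed, ~33 ms between rendered frames
# 60 fps rendered -> ~108 fps displayed, ~17 ms between rendered frames
```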
As another tool in the performance-enhancing toolbox, DLSS FG is nice to have. But to put it front-and-center in comparisons with previous-generation graphics cards is, at best, painting an overly rosy picture of what upgraders can actually expect.