Graphics card power consumption will keep rising and will exceed 700 watts by 2025
July 11, 2022
There is no doubt that the consumption of graphics cards has grown significantly in the last two decades. To illustrate, there’s nothing better than a specific example, so let’s go back to 2010, when AMD launched the Radeon HD 6000 series. The Radeon HD 6970 was the top of the range of that generation, and its peak consumption in games approached 200 watts.
2020 saw the arrival of the Radeon RX 6900 XT, one of the most powerful graphics cards of its generation, and its measured consumption in games was around 320 watts. As we can see, the latter is much more powerful, uses a far more advanced architecture and is manufactured on TSMC’s 7 nm node, while the former used the veteran 40 nm node.
Node jumps and new architectures are no longer enough to keep consumption below the levels of previous generations, and thanks to AMD we know that this trend will continue with the coming graphics generations. To illustrate the point, Sunnyvale has published a chart showing the curious evolution that the high-performance graphics card sector has undergone over the years.
Between 2005 and 2015 we enjoyed a relatively stable decade in which node jumps and new architectures made it possible to create increasingly powerful graphics cards while keeping consumption stable. Since 2015 there has been a major turning point: consumption has started to grow enormously, a trend that, as we said, will continue, to the point that by 2025 we could see graphics cards that consume more than 700 watts.
Next-generation graphics cards will consume as much as an entire current PC
At least that is what AMD’s chart suggests, and it is certainly striking, but we need to give it the proper context to understand what is going on and why this trend towards higher consumption makes sense and is “acceptable”. In this regard, we must keep three things in mind:
The GPU is an increasingly complex component whose performance keeps growing dramatically. More performance means more consumption, but it can still mean greater efficiency as long as each generation delivers more performance per watt than the one before (see the sketch after this list).
GPU designs are no longer limited to the traditional concept; they now include specialized cores that add complexity and drive up consumption, as well as other performance-enhancing elements, such as AMD’s Infinity Cache.
Increasingly fast graphics memory is also used, which improves performance but also raises consumption. NVIDIA’s use of GDDR6X memory is a notable example here.
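
To make the “more performance per watt” rule concrete, here is a minimal Python sketch of how that comparison works. The benchmark scores it uses are invented placeholders rather than real measurements; only the approximate power figures (around 200 and 320 watts) come from the examples above, so treat it as an illustration of the metric, not as data.

# Toy illustration of the "more performance per watt each generation" rule.
# The benchmark scores below are placeholders, not real measurements; only
# the approximate board power figures (~200 W and ~320 W) appear in the text.

def perf_per_watt(score: float, watts: float) -> float:
    """Return a simple efficiency metric: benchmark score per watt consumed."""
    return score / watts

old_gen = {"name": "Radeon HD 6970", "score": 100.0, "watts": 200.0}     # score is assumed
new_gen = {"name": "Radeon RX 6900 XT", "score": 600.0, "watts": 320.0}  # score is assumed

old_eff = perf_per_watt(old_gen["score"], old_gen["watts"])
new_eff = perf_per_watt(new_gen["score"], new_gen["watts"])

print(f"{old_gen['name']}: {old_eff:.2f} points per watt")
print(f"{new_gen['name']}: {new_eff:.2f} points per watt")
print(f"Efficiency gain: {new_eff / old_eff:.1f}x despite drawing "
      f"{new_gen['watts'] - old_gen['watts']:.0f} W more")

In a real review the score would be an average frame rate or a benchmark result, but the logic of the comparison is the same: a card can draw more watts and still be the more efficient of the two.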
This trend towards higher consumption makes sense because of the three points we have just seen, and it is acceptable because it comes, as we said, with higher performance per watt than the previous generation, and because we are at a stage where ray tracing and AI applied to games have become two key pillars alongside classic rasterization. Games are becoming more and more demanding, users want more performance, and certain sacrifices have to be made to meet those needs optimally.
Sam Naffziger, Senior Vice President, Corporate Fellow and Product Technology Architect at AMD, delved into this topic, talking about the slower pace of jumps to more advanced production nodes and about the increase in the area occupied at the silicon level, a direct result of the greater complexity of the GPU that we explained earlier. He also confirmed that AMD is analyzing the approach NVIDIA is taking and that they want to “do it much better”.
We’ll see how things evolve, but it is clear that we are approaching the limits of traditional computing in terms of efficiency, and that the shift to new designs at the hardware level and new tools at the software level will be key to moving forward and improving in this regard. One of the most important developments we can expect in the medium term is the move to modular (chiplet) designs in the GPU sector, a path AMD took before NVIDIA.