The GeForce RTX 50 will continue to use a monolithic GPU
July 21, 2022
The jump to chiplets in the GPU sector seems inevitable; it will happen sooner or later. AMD has already made this leap in the professional sector, while NVIDIA has continued to bet on a monolithic core, and according to the latest rumors, that will not change with the GeForce RTX 50.
The GeForce RTX 50 family will be the successor to the GeForce RTX 40 series, a generation that has not yet been announced but which we already know quite well thanks to the various leaks that have surfaced. We know that it will use a monolithic core design and that the most powerful chip for the consumer market, tentatively known as AD102, will have 18,432 shaders in its full configuration.
The GeForce RTX 50 would therefore keep the monolithic core design that we will see in the GeForce RTX 40, which in turn will be an evolution of the cores NVIDIA used in the GeForce RTX 30, based on the Ampere architecture. Reading between the lines, this tells us several things:
The split into specialized cores (Tensor and RT) will continue to be present.
Shrinking manufacturing processes will be key to increasing the number of shaders.
We do not expect major changes at the architecture level. Blackwell would be the culmination of a design first used by NVIDIA in Turing, matured in Ampere and refined by Ada Lovelace.
Why does it make sense to keep a monolithic GPU in the GeForce RTX 50?
The key point is easy to understand: it is not so easy to connect two or more GPU chips and make them work as one. Moving to an MCM design in the graphics sector opens the door to a new stage where developer tools and games must be prepared to make optimal use of these designs.
On top of that, we must add the problems that can arise when connecting chips, especially in terms of latency, workload distribution, and resource management and utilization. This is not a trivial matter, and it requires deep work by both developers and chip designers to make everything behave as it should.
I know what you’re thinking: if MCM designs bring all these problems, why is abandoning the monolithic core inevitable? The answer is simple. The growing complexity of GPUs means an ever-increasing number of shaders, and combined with ever-shrinking manufacturing processes, transferring these patterns onto the wafers becomes increasingly difficult.
In the end, it is easier and more cost-effective to make a 10,000-shader GPU and link two of them together to reach 20,000 shaders than to build a single 20,000-shader chip, because the larger, more complex die carries a greater risk of something going wrong during manufacturing. Two smaller dies are simply more efficient in terms of cost and wafer yield, as the rough sketch below illustrates.
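To see why die size matters so much, here is a minimal sketch using a simple Poisson defect-yield model. The defect density, die areas, and wafer size are illustrative assumptions chosen to show the trend, not figures from NVIDIA or any foundry.

```python
import math

# Illustrative assumptions only: defect density and die areas are made up
# to show the trend, not taken from any foundry or GPU spec sheet.
DEFECT_DENSITY = 0.1                          # defects per cm^2 (assumed)
WAFER_AREA_CM2 = math.pi * (30.0 / 2) ** 2    # 300 mm wafer, edge losses ignored

def poisson_yield(die_area_cm2: float, defect_density: float) -> float:
    """Fraction of dies with zero defects under a Poisson defect model."""
    return math.exp(-defect_density * die_area_cm2)

def good_dies_per_wafer(die_area_cm2: float) -> float:
    """Approximate number of defect-free dies per wafer (crude packing estimate)."""
    dies = WAFER_AREA_CM2 / die_area_cm2
    return dies * poisson_yield(die_area_cm2, DEFECT_DENSITY)

# One big 6 cm^2 die vs. two 3 cm^2 chiplets delivering the same shader count.
big_die = 6.0
chiplet = 3.0

print(f"Monolithic yield:  {poisson_yield(big_die, DEFECT_DENSITY):.1%}")
print(f"Chiplet yield:     {poisson_yield(chiplet, DEFECT_DENSITY):.1%}")
print(f"Good monolithic dies per wafer: {good_dies_per_wafer(big_die):.0f}")
# Two chiplets are needed per GPU, so halve the good-chiplet count.
print(f"Good chiplet pairs per wafer:   {good_dies_per_wafer(chiplet) / 2:.0f}")
```

With these assumed numbers, the two-chiplet approach delivers noticeably more complete GPUs per wafer than the single large die, which is exactly the trade-off the industry is weighing. A real comparison would also have to account for packaging and interconnect costs, which this sketch deliberately ignores.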