According to new rumors, Nvidia's next-generation Blackwell cards will have neither a wider memory bus nor more VRAM, with the exception of the RTX 5090.

Nvidia's keynote at Computex 2024 said nothing about next-generation GeForce graphics cards, so for the time being we will have to rely on the usual sources of leaks and rumors to build a picture of what is coming. According to the latest of these, the RTX 5090 will use a 512-bit memory bus, while the rest of the lineup will stick with the same memory configurations as the current RTX 40 series.

The source of this rumor is Kopite7kimi on X, who has earned quite a reputation for accurate predictions and statements about upcoming GPUs. In a recent post, the leaker revealed the memory configurations for five different Blackwell GPUs that are expected to launch later this year (though some of them may not be announced until 2025).

First is GB202, which will undoubtedly be used for the RTX 5090 and a series of professional-grade graphics cards; the Blackwell monster is said to have a 512-bit memory bus and GDDR7 VRAM.

Combine that with Micron's slowest GDDR7 chips, which run at 28 Gbps, and the total bandwidth would be around 1.8 TB/s, roughly 77% more than the RTX 4090. Even if the RTX 5090's bus turns out to be "only" 384 bits wide, the faster GDDR7 (the RTX 4090 uses 21 Gbps GDDR6X) would still increase bandwidth by 33%.
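To check the arithmetic, here is a quick back-of-the-envelope sketch in Python. The 28 Gbps and 512-bit figures are the leaked claims, not confirmed specs:

def bandwidth_gb_per_s(data_rate_gbps: float, bus_width_bits: int) -> float:
    # Peak bandwidth in GB/s: per-pin data rate (Gbps) times bus width (bits), divided by 8 bits per byte
    return data_rate_gbps * bus_width_bits / 8

rtx_4090 = bandwidth_gb_per_s(21, 384)  # 21 Gbps GDDR6X on a 384-bit bus -> 1008 GB/s
rtx_5090 = bandwidth_gb_per_s(28, 512)  # rumored 28 Gbps GDDR7 on a 512-bit bus -> 1792 GB/s, about 1.8 TB/s
print(rtx_5090 / rtx_4090 - 1)                     # ~0.78, i.e. roughly 77-78% more bandwidth
print(bandwidth_gb_per_s(28, 384) / rtx_4090 - 1)  # ~0.33 if the bus stays at 384 bits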

Kopite7kimi suggests that the other GPU variants are unchanged in terms of memory bus width: GB203 is 256-bit, GB205 is 192-bit, and the lower-end GB206 and GB207 are both 128-bit. That matches AD103, AD104, AD106, and AD107. However, since most graphics cards using these GPUs should use GDDR7, bandwidth should still improve considerably.

It is worth noting that the width of the memory bus not only affects VRAM bandwidth, it also determines how much VRAM can be fitted to a graphics card. At the moment, Micron's GDDR7 modules are all 32 bits wide with a density of 16 Gb (2 GB per chip), so the maximum capacity on a 256-bit bus is 16 GB.
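As a rough sketch of how bus width caps capacity, assuming one chip per 32-bit channel and 2 GB per chip as described above:

def max_vram_gb(bus_width_bits: int, chip_width_bits: int = 32, chip_capacity_gb: int = 2) -> int:
    # Number of memory chips the bus can host, times the capacity of each chip
    return (bus_width_bits // chip_width_bits) * chip_capacity_gb

print(max_vram_gb(256))  # 16 GB on a 256-bit bus (GB203)
print(max_vram_gb(192))  # 12 GB on a 192-bit bus (GB205)
print(max_vram_gb(128))  # 8 GB on a 128-bit bus (GB206/GB207)
print(max_vram_gb(512))  # 32 GB on a 512-bit bus (GB202)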

Therefore, assuming these specs are correct, Blackwell's upcoming cards will not have more VRAM than the current Ada Lovelace models, with the RTX 5090 as the exception: if the successor to the RTX 4090 really does have a 512-bit memory bus, it could carry up to 32 GB.

The other point suggested by Kopite7kimi in his post is the internal configuration of the shader blocks in each chip. For example, the 12*8 for GB202 shows the number of GPCs (Graphics Processing Clusters) and how many TPCs (Texture Processing Clusters) are in each GPC.

Since AD102 has a 12*6 configuration, does this mean the RTX 5090 will have more shaders than the RTX 4090? It is certainly possible. However, the GPC*TPC figures do not tell us how many SMs (streaming multiprocessors) are in each TPC, nor how many shaders are in each SM.

On Ada Lovelace chips there are two SMs per TPC and 128 shaders per SM; Nvidia may be using more SMs per TPC, more shaders per SM, or a combination of both in the Blackwell GPUs. However, this is all speculation right now, so it is best to ignore it all until more details are known.
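For a sense of scale, here is the same kind of back-of-the-envelope arithmetic, assuming (and this is purely an assumption) that Blackwell keeps Ada's two SMs per TPC and 128 shaders per SM:

def shader_count(gpcs: int, tpcs_per_gpc: int, sms_per_tpc: int, shaders_per_sm: int) -> int:
    # Total shader (CUDA core) count from the cluster hierarchy: GPC -> TPC -> SM -> shaders
    return gpcs * tpcs_per_gpc * sms_per_tpc * shaders_per_sm

ad102 = shader_count(12, 6, 2, 128)  # full AD102: 18,432 shaders
gb202 = shader_count(12, 8, 2, 128)  # GB202, if Ada's 2 SMs/TPC and 128 shaders/SM carry over: 24,576 shaders
print(ad102, gb202)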

Nvidia's market share for discrete GPUs, both add-in cards and laptop chips, is so large that even a new round of graphics processors that is not much faster than the previous products could still sell in large numbers.

Blackwell GPUs may turn out to be fundamentally not much faster than Ada Lovelace GPUs, but thanks to greater VRAM bandwidth, and perhaps more cache and better AI capabilities, the RTX 50 series cards may still be noticeably better than the RTX 40 series.

Time will of course tell, but for now all we can do is speculate on rumors.
