Nvidia, the leader in AI hardware, just dropped a bombshell: the Blackwell B200, the follow-up to its highly sought-after Hopper H100 and GH200 Grace Hopper superchips. This next-generation data center and AI GPU is supposed to deliver a major leap in compute performance.
The Blackwell Architecture
Transistor Boost: At a high level, the B200 GPU packs an astonishing 208 billion transistors, more than two and a half times the 80 billion found in each of the H100 and H200.
AI Speed: The B200 delivers up to 20 petaflops of AI compute from a single GPU, five times the H100's 4 petaflops. (The comparison is somewhat generous to Blackwell: the 20-petaflop figure is for the new FP4 number format, while the H100's 4 petaflops is FP8.)
Memory Power: The B200 carries 192GB of HBM3e memory, delivering an impressive 8 TB/s of bandwidth.
A Dual-Die Configuration
Two Chips in One: The Blackwell B200 isn't a conventional single-die GPU. Instead, it pairs two tightly linked dies that operate as one unified CUDA GPU. A 10 TB/s NV-HBI (Nvidia High-Bandwidth Interface) connection ties the dies together so they behave as a single, fully coherent chip.
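Because the package presents itself to software as one GPU, a standard device query should report a single device with the combined memory rather than two halves. Here is a minimal sketch using the stock CUDA runtime API; the expected B200 output noted in the comments is an assumption based on Nvidia's description, not a measured result:

```cuda
// device_query.cu — enumerate CUDA devices and print their memory.
// On a dual-die B200, the expectation (per Nvidia's description) is that
// this reports ONE device with the full 192GB, not two 96GB devices.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int count = 0;
    cudaGetDeviceCount(&count);
    printf("CUDA devices visible: %d\n", count);

    for (int i = 0; i < count; ++i) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, i);
        printf("Device %d: %s, %.0f GB of global memory\n",
               i, prop.name, prop.totalGlobalMem / 1e9);
    }
    return 0;
}
```

Compile with `nvcc device_query.cu` on any CUDA-capable system to see how a given GPU enumerates.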
Both dies are fabricated on TSMC's 4NP process node, a refined version of the 4N process used for Hopper. Because 4NP doesn't deliver a big jump in transistor density, Nvidia scaled up by using two dies rather than one. Each die carries four 24GB HBM3e stacks, for 96GB per die, with each stack providing 1 TB/s of bandwidth over a 1024-bit interface.
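Those per-stack numbers multiply out exactly to the headline figures quoted earlier. A back-of-the-envelope check, using only the values from this article (the file name is ours):

```cpp
// hbm_math.cpp — sanity-check the published B200 memory figures.
#include <cstdio>

int main() {
    const int dies              = 2;   // two dies per B200 package
    const int stacks_per_die    = 4;   // four HBM3e stacks on each die
    const int gb_per_stack      = 24;  // 24GB per stack
    const double tbps_per_stack = 1.0; // 1 TB/s per stack (1024-bit link)

    const int capacity_gb     = dies * stacks_per_die * gb_per_stack;
    const double bandwidth_tb = dies * stacks_per_die * tbps_per_stack;

    printf("Capacity : %d GB\n", capacity_gb);       // 192 GB, as quoted
    printf("Bandwidth: %.0f TB/s\n", bandwidth_tb);  // 8 TB/s, as quoted
    return 0;
}
```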
What It Means and What to Expect
Data Centers: The Blackwell B200 is set to reshape data centers by accelerating both AI training and inference.
Consumer-Class GPUs: Nvidia is also expected to build consumer-class GPUs on Blackwell, but they likely won't arrive until 2025 and will differ substantially from the data center chips.