Nvidia CEO Jensen Huang unveils next-gen ‘Blackwell’ chip family at GTC
Huang opened his presentation with an overview of the increasing size of AI workloads, noting that even the most powerful chips would need 30 billion seconds, or nearly 1,000 years, to train the largest models. Nvidia's H100 GPU, the current state-of-the-art chip, delivers on the order of 2,000 trillion floating-point operations per second, or 2,000 TFLOPS. A thousand TFLOPS equals one petaFLOP, so the H100 and its sibling, the H200, can manage only a couple of petaFLOPS, far short of the scale Huang described.
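The arithmetic behind those figures is easy to check. A quick sketch (using only the 30-billion-second and 2,000-TFLOPS values quoted above; the function names are illustrative):

```python
# Back-of-the-envelope check of the figures quoted in the keynote.
SECONDS_PER_YEAR = 3600 * 24 * 365  # ignoring leap years

def seconds_to_years(seconds: float) -> float:
    """Convert a duration in seconds to years."""
    return seconds / SECONDS_PER_YEAR

def tflops_to_pflops(tflops: float) -> float:
    """1,000 teraFLOPS equals one petaFLOPS."""
    return tflops / 1_000

years = seconds_to_years(30e9)      # the 30 billion seconds Huang cited
print(f"{years:.0f} years")         # about 951 years, i.e. nearly 1,000

h100 = tflops_to_pflops(2_000)      # H100 throughput, order of magnitude
print(f"{h100:.0f} PFLOPS")         # a couple of petaFLOPS
```

Thirty billion seconds works out to roughly 951 years, which is where the "nearly 1,000 years" figure comes from.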