
Nvidia’s Hopper H100 SXM5 Pictured: Monstrous GPU Has Brutal VRM Config

Modern compute GPUs are tailored to deliver incredible performance at any cost, so their power consumption and cooling requirements are enormous. Nvidia's latest H100 compute GPU, based on the Hopper architecture, can consume up to 700W in a bid to deliver up to 60 FP64 Tensor TFLOPS, so it was clear from the start that we were dealing with a rather monstrous SXM5 module design. Yet Nvidia had never shown the module up close.

Our colleagues at ServeTheHome, who were lucky enough to visit one of Nvidia's offices and see an H100 SXM5 module for themselves, published a photo of the compute GPU on Thursday. These SXM5 cards are designed for Nvidia's own DGX H100 and DGX SuperPod high-performance computing (HPC) systems as well as for machines built by third parties. The modules will not be sold separately at retail, so seeing one is a rare opportunity.

Nvidia's H100 SXM5 module carries a fully-enabled GH100 compute GPU featuring 80 billion transistors and packing 8448/16896 FP64/FP32 cores as well as 528 Tensor cores (see the tables below for detailed H100 specifications and performance figures). The GH100 GPU comes with 96GB of HBM3 memory, though because of ECC support and some other factors, users can access 80GB of ECC-enabled HBM3 connected over a 5120-bit bus. The particular GH100 compute GPU pictured is an A1 revision, marked U8A603.L06, and was packaged in the 53rd week of 2021 (i.e., December 28 to December 31).
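
For readers curious how these figures surface to software, the short sketch below (our illustration, not from the article or from Nvidia) queries a GPU's reported name, memory size, bus width, SM count, and ECC state through the standard CUDA runtime call cudaGetDeviceProperties. On an H100 SXM5, the usable memory should appear as roughly 80GB rather than the full 96GB of HBM3.

    // Illustrative sketch: print the device properties discussed above using
    // the CUDA runtime API. Compile with, e.g.: nvcc h100_props.cu -o h100_props
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        cudaDeviceProp prop;
        cudaError_t err = cudaGetDeviceProperties(&prop, 0);
        if (err != cudaSuccess) {
            std::printf("cudaGetDeviceProperties failed: %s\n", cudaGetErrorString(err));
            return 1;
        }
        // On an H100 SXM5, totalGlobalMem is expected to report about 80GB (the
        // ECC-enabled portion of the 96GB of HBM3) and memoryBusWidth 5120 bits.
        std::printf("Device:           %s\n", prop.name);
        std::printf("Global memory:    %.1f GB\n", prop.totalGlobalMem / 1e9);
        std::printf("Memory bus width: %d-bit\n", prop.memoryBusWidth);
        std::printf("Multiprocessors:  %d\n", prop.multiProcessorCount);
        std::printf("ECC enabled:      %s\n", prop.ECCEnabled ? "yes" : "no");
        return 0;
    }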

(Image credit: ServeTheHome)


