Techno Blender
Digitally Yours.

NVIDIA brings massive updates to flagship AI chips, making them more efficient at handling larger systems



NVIDIA has refreshed its flagship AI chip lineup, unveiling the all-new H200 as the successor to the H100. The new chip comes with increased high-bandwidth memory among other enhancements and will launch in 2024.

NVIDIA, the leading maker of AI chips and one of the biggest GPU manufacturers, has announced the upcoming release of its advanced artificial intelligence (AI) chip, the H200, scheduled to launch next year.

The H200, which is being positioned as a massive upgrade over the existing top-tier H100 chip, comes with key enhancements, with a primary focus on increased high-bandwidth memory—a pivotal component determining the chip’s rapid data processing capabilities.

NVIDIA’s dominance in the AI chip market, powering notable services like OpenAI’s ChatGPT, makes this announcement significant for various sectors relying on generative AI applications.


The augmented high-bandwidth memory and accelerated connection to the chip’s processing elements promise swifter response times for AI services, enabling quicker generation of human-like responses to user queries.

The H200 chip features an impressive 141 gigabytes of high-bandwidth memory, a substantial upgrade from the 80 gigabytes found in its predecessor, the H100.

While NVIDIA has not disclosed the memory suppliers for the new chip, Micron Technology expressed its intent in September to become a supplier for NVIDIA. NVIDIA also sources memory from SK Hynix, which reported a sales resurgence in the AI chip sector last month.

In a strategic move, NVIDIA has secured partnerships with major cloud service providers, ensuring rapid integration of the H200 chip into the market.

Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure are among the notable providers set to offer access to the H200 chips. Additionally, speciality AI cloud service providers, including CoreWeave, Lambda, and Vultr, will also be part of the initial rollout.

The H200’s anticipated debut in collaboration with tech giants highlights the chip’s potential impact on diverse industries relying on cutting-edge AI capabilities. As AI applications continue to evolve, NVIDIA’s commitment to advancing AI hardware positions the H200 as a pivotal player in the unfolding landscape of artificial intelligence.

(With input from agencies)

