Nvidia announces new chip to power AI models, reduce costs

Nvidia announced a next-generation computer chip to power artificial intelligence programs and large language models, reducing costs for running AI models while scaling data centers.

Nvidia announced a next-generation computer chip on Tuesday that’s designed to power artificial intelligence (AI) models and keep the company at the leading edge of AI.

Santa Clara, California-based Nvidia has emerged as a powerhouse in the AI space in part thanks to its graphics processing units (GPUs), which are used to train the large language models that power AI software like ChatGPT. Nvidia’s new version of the Grace Hopper Superchip more than triples the memory capacity and bandwidth of the company’s current model.

"To meet surging demand for generative AI, data centers require accelerated computing platforms with specialized needs," Nvidia co-founder and CEO Jensen Huang said in a statement. "The new GH200 Grace Hopper Superchip platform delivers this with exceptional memory technology and bandwidth to improve throughput, the ability to connect GPUs to aggregate performance without compromise, and a server design that can be easily deployed across the entire data center."


Leading manufacturers already offer systems based on the previously announced version of the Grace Hopper Superchip. The next-generation version will be fully compatible with Nvidia’s MGX data center server design, which the company says will help customers quickly and cost-effectively integrate it into their preferred server configurations.

Huang said in a keynote address at a computer graphics conference that the next-generation superchip is designed to "scale out the world’s data centers."


He added that it will enhance the ability of AI software to generate content or make predictions – a process known as inference – which will help companies reduce costs as they develop their AI tools. 

"You can take pretty much any large language model you want and put it in this and it will inference like crazy. The inference cost of large language models will drop significantly," Huang explained. 

The company indicated in a press release that deliveries of the next-generation platform are expected to occur in the second quarter of calendar year 2024. Huang said samples will be available toward the end of the year. Nvidia plans to offer two versions of the platform to customers, including a version with two chips that customers can integrate into their systems, and a complete server system that combines two Grace Hopper designs.


Nvidia’s market capitalization topped the $1 trillion threshold for the first time in late May 2023.

The company’s share price has soared amid the surge of interest in AI, rising from just over $143 per share to over $446 per share through Tuesday’s close – a gain of about 212% year to date.
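The roughly 212% figure follows directly from the two share prices quoted above; a minimal sketch of the arithmetic (the prices are the article's approximate figures, not exact closes):

```python
# Approximate share prices quoted in the article (assumptions, not exact closes).
start_price = 143.0  # share price at the start of the year, in dollars
end_price = 446.0    # share price through Tuesday's close, in dollars

# Year-to-date percentage gain: (new - old) / old * 100
ytd_gain_pct = (end_price - start_price) / start_price * 100
print(f"YTD gain: about {ytd_gain_pct:.0f}%")  # → YTD gain: about 212%
```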

Reuters contributed to this report.
