Nvidia launches A100 80GB GPU for supercomputers

Nvidia has introduced an 80GB version of its A100 graphics processing unit (GPU), aiming the graphics and AI chip at supercomputing applications.

The chip is based on the company’s Ampere graphics architecture and is aimed at helping businesses and government labs make key decisions more quickly by enabling better real-time data analysis. Nvidia made the announcement at the outset of the SC20 supercomputing conference this week.

The 80GB version has twice the memory of its predecessor, which was introduced six months ago. “We’ve doubled everything in this system to make it easier for customers,” Nvidia executive Paresh Kharya said in a press briefing. He also said 90% of the world’s data was created in the last two years.

The new chip provides researchers and engineers with more speed and performance for their AI and scientific applications. It delivers over 2 terabytes per second of memory bandwidth, enabling a system to feed data to the GPU more quickly.
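As a rough illustration of what that bandwidth figure bounds, the sketch below (PyTorch with CUDA assumed; the buffer size and function name are made up for this example) times a large device-to-device copy and reports the effective throughput. Measured numbers will land below the theoretical peak; the point is only to show how memory bandwidth limits how fast data moves on the GPU.

```python
import torch

# A rough sketch (PyTorch with CUDA assumed; sizes are illustrative):
# time a large device-to-device copy and report effective throughput,
# which is the quantity that memory bandwidth bounds.
def measure_copy_bandwidth(num_bytes=4 * 1024**3, repeats=10):
    src = torch.empty(num_bytes, dtype=torch.uint8, device="cuda")
    dst = torch.empty_like(src)
    start = torch.cuda.Event(enable_timing=True)
    end = torch.cuda.Event(enable_timing=True)
    torch.cuda.synchronize()
    start.record()
    for _ in range(repeats):
        dst.copy_(src)
    end.record()
    torch.cuda.synchronize()
    seconds = start.elapsed_time(end) / 1000.0  # elapsed_time is in milliseconds
    # Each copy reads num_bytes and writes num_bytes, so 2 * num_bytes move per pass.
    return 2 * num_bytes * repeats / seconds / 1e9  # GB/s

if __name__ == "__main__":
    print(f"Effective copy bandwidth: {measure_copy_bandwidth():.0f} GB/s")
```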

“Supercomputing has changed in profound ways, expanding from being just focused on simulations to AI supercomputing with data-driven approaches that are now complementing traditional simulations,” Kharya said. He added that Nvidia’s end-to-end approach to supercomputing, from workflows for simulation to AI, is necessary to keep making advances. Kharya said Nvidia now has 2.3 million developers across its various platforms and added that supercomputing is important for the leading edge of those developers.

He also said a recent simulation of the novel coronavirus modeled 305 million atoms, and that the simulation, which used 27,000 Nvidia GPUs, was the largest molecular simulation ever done.

The Nvidia A100 80GB GPU is available in the Nvidia DGX A100 and Nvidia DGX Station A100 systems, which are expected to ship this quarter.

Computer makers Atos, Dell, Fujitsu, Gigabyte, Hewlett Packard Enterprise, Inspur, Lenovo, Quanta, and Supermicro will offer four-GPU or eight-GPU systems based on the new A100 80GB GPU in the first half of 2021.

Nvidia’s new chip will compete with the new AMD Instinct MI100 GPU accelerator that Advanced Micro Devices announced today. In contrast to AMD, Nvidia uses a single GPU architecture for both AI and graphics.

Moor Insights & Technique analyst Karl Freund mentioned in an e-mail to VentureBeat that the AMD GPU may give 18% higher efficiency than the unique 40GB A100 from Nvidia. However he mentioned genuine packages would possibly take pleasure in the 80GB Nvidia model. He additionally mentioned that whilst price-sensitive shoppers would possibly desire AMD, he doesn’t suppose AMD can tackle Nvidia relating to AI efficiency. “In AI, Nvidia raised the bar once more, and I don’t see any competition who can transparent that hurdle,” Freund mentioned.

For AI training, recommender system models like DLRM have massive tables representing billions of users and billions of products. The A100 80GB delivers up to a 3x speedup, so businesses can quickly retrain these models to deliver highly accurate recommendations.
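To make that memory pressure concrete, here is a minimal sketch (PyTorch assumed; the table sizes are illustrative, not Nvidia’s) of how the embedding tables in a DLRM-style recommender add up: two tables alone can approach the capacity of a single GPU before optimizer state is even counted.

```python
import torch

# A minimal sketch (PyTorch assumed; table sizes are illustrative): the
# embedding tables of a DLRM-style recommender dominate its memory
# footprint, which is why GPU memory capacity matters for retraining.
num_users, num_items, dim = 100_000_000, 10_000_000, 128

# The "meta" device builds the modules without allocating weights, so the
# tables can be sized without needing tens of gigabytes of RAM up front.
user_table = torch.nn.EmbeddingBag(num_users, dim, mode="sum", device="meta")
item_table = torch.nn.EmbeddingBag(num_items, dim, mode="sum", device="meta")

params = sum(p.numel() for m in (user_table, item_table) for p in m.parameters())
print(f"{params:,} embedding parameters, about {params * 4 / 1e9:.0f} GB in fp32 "
      "before activations and optimizer state are counted")
```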

The A100 80GB also enables training of the largest models, with more parameters fitting within a single HGX-powered server, such as GPT-2, a natural language processing model with superhuman generative text capability.

This eliminates the need for data- or model-parallel architectures that can be time-consuming to implement and slow to run across multiple nodes, Nvidia said.

With its multi-instance GPU (MIG) technology, the A100 can be partitioned into up to seven GPU instances, each with 10GB of memory. This provides secure hardware isolation and maximizes GPU utilization for many smaller workloads.
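For reference, a process can be confined to one of those MIG slices without application changes by making only that slice visible to it. The sketch below is a hedged illustration (the UUID is a placeholder, and PyTorch is used purely as an example consumer of the device); it pins the process to a single instance and reports the roughly 10GB of memory that slice exposes.

```python
import os

# Hedged sketch: pin a process to a single MIG slice by pointing
# CUDA_VISIBLE_DEVICES at that slice's UUID before CUDA initializes.
# The UUID below is a placeholder; real ones are listed by `nvidia-smi -L`.
os.environ["CUDA_VISIBLE_DEVICES"] = "MIG-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx"

import torch  # imported after setting the env var so only that slice is visible

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    # On a 1g.10gb instance of an A100 80GB this reports roughly 10GB.
    print(f"{props.name}: {props.total_memory / 1e9:.1f} GB visible to this process")
```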

The A100 80GB can also deliver acceleration for scientific applications, such as weather forecasting and quantum chemistry. Quantum Espresso, a materials simulation, achieved throughput gains of nearly double on a single node of A100 80GB.

New systems for the GPU

Above: Nvidia’s DGX Station A100 supercomputer.

Image Credit: Nvidia

Meanwhile, Nvidia announced the second generation of its AI computing system, dubbed the Nvidia DGX Station A100, which the company calls a datacenter in a box. The box delivers 2.5 petaflops of AI performance with four A100 Tensor Core GPUs. All told, it has up to 320GB of GPU memory.

Nvidia VP Charlie Boyle said in a press briefing that the system provides up to 28 separate GPU instances (the four GPUs each partitioned into seven MIG instances) to run parallel jobs. “This is like a supercomputer under your desk,” Boyle said.

Customers using the DGX Station platform span education, financial services, government, health care, and retail. They include BMW Group, Germany’s DFKI AI research center, Lockheed Martin, NTT Docomo, and the Pacific Northwest National Laboratory. The Nvidia DGX Station A100 and Nvidia DGX A100 640GB systems will be available this quarter.

Mellanox networking

Above: Mellanox’s tech will enable faster networking for everything from supercomputers to self-driving cars.

Image Credit: Nvidia/Mellanox

Finally, Nvidia announced Mellanox 400G InfiniBand networking for exascale AI supercomputers. It is the seventh generation of Mellanox InfiniBand technology, with data moving at 400 gigabits per second, compared with the first generation’s 10 gigabits per second. Nvidia bought Mellanox for $6.8 billion in 2019.

The InfiniBand tech provides networking throughput of 1.64 petabits per second, or 5 times higher than the last generation. Mellanox’s tech will enable faster networking for everything from supercomputers to self-driving cars, Nvidia senior VP Gilad Shainer said in a press briefing.

