In the GPU game, not many can stand up to Nvidia. The company holds a near-monopoly, and rolls out chips that leave the competition flustered every time. And whenever the competition catches up (looking at you, AMD), Nvidia just takes another leap. This time, it leapt for the moon. In a livestreamed keynote, NVIDIA founder and CEO Jensen Huang announced the company's new GPU architecture, Ampere, and the first product built on it, the mighty A100.
Huang called the A100 "the greatest generational performance leap of NVIDIA's eight generations of GPUs," adding that it "is also built for data analytics, scientific computing and cloud graphics." He also said that it is in production and already shipping to customers worldwide.
Companies like Alibaba Cloud, Amazon Web Services, Baidu Cloud, Cisco, Dell Technologies, Google Cloud, Hewlett Packard Enterprise, Microsoft Azure and Oracle have already signed on, and are incorporating the new GPU into their offerings as we speak.
Huang said that the new GPU packs a mind-boggling 54 billion transistors (which are basically the on-off switches of an electronic device, and make up the 0s and 1s of computer code). This makes it the world's largest 7nm processor, and by a long shot, dare I say.
Other features include Tensor Cores with TF32 (a new math format that accelerates single-precision AI training out of the box), structural sparsity acceleration (a new efficiency technique that harnesses the inherently sparse nature of AI math for higher performance), multi-instance GPU, or MIG (allowing a single A100 to be partitioned into as many as seven independent GPUs, each with its own resources), and third-generation NVLink technology (doubling the high-speed connectivity between GPUs, allowing A100 servers to act as one giant GPU).
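To get a feel for what TF32 does: it keeps FP32's 8-bit exponent (and therefore its full numeric range) but shrinks the 23-bit mantissa down to 10 bits, matching FP16's precision. The sketch below illustrates the idea by truncating the low mantissa bits of a 32-bit float; this is a simplification for illustration only (real Tensor Core hardware rounds rather than truncates, and does this internally during matrix math):

```python
import struct

def tf32_like(x: float) -> float:
    """Illustrative only: keep FP32's 8-bit exponent but drop the
    mantissa from 23 bits to TF32's 10 bits by zeroing the low 13 bits.
    (Real TF32 hardware rounds; truncation is used here for simplicity.)"""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # reinterpret as uint32
    bits &= 0xFFFFE000  # zero the 13 least-significant mantissa bits
    return struct.unpack(">f", struct.pack(">I", bits))[0]

# Powers of two survive exactly; other values lose a little precision.
print(tf32_like(1.0))   # exact
print(tf32_like(0.1))   # close to 0.1, with reduced precision
```

The point of the format is that most neural-network training tolerates that precision loss, so frameworks can use TF32 "out of the box" without code changes.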
All of these features together result in up to 6x higher performance than Nvidia's previous-generation Volta architecture for AI training, and 7x higher performance for inference.
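The structural sparsity acceleration mentioned above is built around a fine-grained 2:4 pattern: in every group of four weights, up to two may be zero, and the hardware skips the zeros to roughly double matrix-math throughput. A minimal sketch of that pruning step, assuming simple magnitude-based selection (real workflows use NVIDIA's pruning tooling inside a training framework):

```python
def prune_2_of_4(weights):
    """Illustrative 2:4 structured pruning: in each group of four values,
    keep the two with the largest magnitude and zero out the other two.
    This is the sparsity pattern Ampere's Tensor Cores can exploit."""
    out = []
    for i in range(0, len(weights), 4):
        group = weights[i:i + 4]
        # Indices of the two largest-magnitude entries in this group.
        keep = sorted(range(len(group)), key=lambda j: abs(group[j]),
                      reverse=True)[:2]
        out.extend(v if j in keep else 0.0 for j, v in enumerate(group))
    return out

print(prune_2_of_4([0.1, -0.9, 0.5, 0.02]))  # [0.0, -0.9, 0.5, 0.0]
```

Because exactly half the values in each group are zero in a fixed pattern, the hardware can store the survivors compactly and skip the zero multiplications entirely.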
The company is also boasting 5 petaflops of AI processing power from the NVIDIA DGX A100, the third generation of its NVIDIA DGX AI system, built on the NVIDIA A100.
This allows a single server to either "scale up" to race through computationally intensive tasks such as AI training, or "scale out" for AI deployment, or inference, Huang said.
However, if that does not tickle your fancy, maybe this will. Huang also announced the next-generation DGX SuperPOD, powered by 140 DGX A100 systems and Mellanox networking technology. This system provides an astronomical 700 petaflops of processing power, putting it on par with the 20 fastest supercomputers in the world.
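That 700-petaflop figure is just the per-system number multiplied out; a quick back-of-envelope check, taking NVIDIA's stated 5 petaflops of AI performance per DGX A100 as given:

```python
# Back-of-envelope check of the SuperPOD figure:
# 140 DGX A100 systems at 5 petaflops of AI performance each.
PETAFLOPS_PER_DGX_A100 = 5
NUM_SYSTEMS = 140

superpod_petaflops = PETAFLOPS_PER_DGX_A100 * NUM_SYSTEMS
print(superpod_petaflops)  # 700 petaflops, i.e. 0.7 exaflops
```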
He did not stop there, as he explained the company's plans to build the world's fastest AI supercomputer by adding 2.8 exaflops of AI computing power to its SATURNV internal supercomputer, for a total of 4.6 exaflops (an exaflop being 10 to the 18th power floating-point operations per second).