Several new chips and technologies were introduced on Tuesday by Nvidia Corp., which claims they will speed up the processing of increasingly complex artificial intelligence algorithms. The announcement ratchets up competition with rival chipmakers vying for lucrative data centre business.
Nvidia’s graphics processing units (GPUs), originally developed to accelerate and improve video quality for the gaming market, have become the dominant chips businesses use for artificial intelligence (AI) workloads, according to the company. Nvidia says its latest GPU, dubbed the H100, can cut computing times for some tasks involved in training AI models from weeks to days.
Nvidia made the announcements during an online session of its AI developer conference. “Data centres are evolving into artificial intelligence factories, processing and refining mountains of data to generate intelligence,” CEO Jensen Huang said in a statement, describing the H100 chip as the “engine” of AI infrastructure.
Businesses have put the technology to work in a variety of applications, from video recommendation to drug discovery, and it is increasingly becoming a critical tool for these enterprises.
The H100 is expected to be available in the third quarter of this year and will be manufactured using a cutting-edge four-nanometre process, packing 80 billion transistors. The chip will also power Nvidia’s upcoming “Eos” supercomputer, which the company claims will be the world’s fastest artificial intelligence system when it launches later this year.
Earlier this year, Facebook parent Meta announced that it would build what it said would be the world’s fastest artificial intelligence supercomputer this year, with a performance of nearly 5 exaflops. On Tuesday, Nvidia said its supercomputer will deliver a peak performance of more than 18 exaflops.
An exaflop is a measure of performance equal to one quintillion, or one billion billion, calculations per second.
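As a rough illustration of the scale involved (the calculation below is mine, not from the article), the two figures quoted above can be compared directly, since an exaflop is 10^18 operations per second:

```python
# One exaflop = 10**18 floating-point operations per second.
EXAFLOP = 10**18

meta_planned = 5 * EXAFLOP   # Meta's announced system, nearly 5 exaflops
nvidia_eos = 18 * EXAFLOP    # Nvidia's claimed peak for Eos, over 18 exaflops

# Ratio of the two headline figures.
ratio = nvidia_eos / meta_planned
print(f"Eos's claimed peak is {ratio:.1f}x Meta's announced figure")
print(f"18 exaflops = {nvidia_eos:,} operations per second")
```

Note that headline exaflop figures from different vendors are not always measured at the same numerical precision, so such comparisons are indicative rather than exact.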
Alongside the GPU, Nvidia introduced a new processor (CPU) based on Arm technology, called the Grace CPU Superchip. It is the first new Arm-based chip the company has announced since its acquisition of Arm Ltd fell through last month over regulatory obstacles. The Grace CPU Superchip, which connects two CPU chips, will be available in the first half of next year and is optimised for artificial intelligence and other computationally intensive tasks.
More and more companies are connecting chips together using technology that allows faster data transfer between them. Apple Inc. unveiled its M1 Ultra chip earlier this month, which joins two M1 Max chips into a single chip with improved performance. Nvidia said on Tuesday that the two CPU chips in the Grace CPU Superchip are linked by its NVLink-C2C technology, which was also unveiled on Tuesday.
Published By: JAINAM SHETH
Edited By : KRITIKA KASHYAP