Nvidia CEO highlights chips for the historic wave of generative AI at Computex


Nvidia CEO Jensen Huang announced a variety of platforms that companies will be able to use to ride a historic wave of generative AI that is transforming industries across the globe.

Huang made the remarks at a keynote speech at the Computex trade show in Taiwan. The speech was his first live keynote delivered in person since the pandemic. He also announced that the Grace Hopper Superchips are now in full production.

He described accelerated computing services, software and systems that are enabling new business models and making current ones more efficient. At Computex, Huang emphasized the trend of industrial digitalization, which lets companies create digital twins of their factories to test designs virtually before building the physical plants. Sensors in the real-world factories then feed data back to the digital twins to refine the overall design.

The Grace Hopper Superchips combine in a single module the energy-efficient Nvidia Grace CPU with a high-performance Nvidia H100 Tensor Core GPU. For enterprises, Huang unveiled DGX GH200, a large-memory AI supercomputer. It uses Nvidia NVLink to combine up to 256 Nvidia Grace Hopper Superchips into a single data-center-sized GPU.



The DGX GH200 packs an exaflop of performance and 144 terabytes of shared memory, nearly 500 times more than in a single Nvidia DGX A100 320GB system. That lets developers build large language models for generative AI chatbots, complex algorithms for recommender systems, and graph neural networks used for fraud detection and data analytics.
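For scale, the "nearly 500 times" figure follows from a straightforward ratio of the two systems' shared memory (a quick sketch using decimal units; the exact ratio rounds up to the article's rough figure):

```python
# Shared-memory comparison: DGX GH200 vs. a single DGX A100 320GB system
gh200_memory_gb = 144 * 1000  # 144 terabytes, expressed in gigabytes
a100_memory_gb = 320          # DGX A100 320GB

ratio = gh200_memory_gb / a100_memory_gb
print(ratio)  # 450.0 -- i.e. "nearly 500 times" more shared memory
```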

Google Cloud, Meta and Microsoft are among the first expected to gain access to the DGX GH200 to explore its capabilities for generative AI workloads.

“DGX GH200 AI supercomputers integrate Nvidia’s most advanced accelerated computing and networking technologies to expand the frontier of AI,” Huang said.

Nvidia Helios

Nvidia is building its own massive AI supercomputer, Nvidia Helios, coming online this year. It will use four DGX GH200 systems linked by Nvidia Quantum-2 InfiniBand networking at up to 400Gb/s to supercharge data throughput for training large AI models.

The DGX GH200 is part of hundreds of systems announced at the event using Nvidia’s latest GPUs and CPUs. Together, they’re bringing generative AI and accelerated computing to millions of users, Nvidia said.

To fit the needs of data centers of every size, Huang announced Nvidia MGX, a modular reference architecture for creating accelerated servers. System makers will use it to quickly and cost-effectively build more than a hundred different server configurations to suit a wide range of AI, HPC and Nvidia Omniverse applications.

MGX lets manufacturers build CPU and accelerated servers using a common architecture and modular components.

Zooming out to the big picture, Huang announced more than 400 system configurations are coming to market with Nvidia Hopper, Grace and Ada Lovelace architectures. They aim to tackle the most complex challenges in AI, data science and high-performance computing.

Grace Hopper helps 5G


Huang also showed how Nvidia is helping reinvent 5G with Grace Hopper. He announced Nvidia is working with telecom giant SoftBank to build a distributed network of data centers in Japan. It will deliver 5G services and generative AI applications on a common cloud platform.

The data centers will use Grace Hopper and Nvidia BlueField-3 DPUs in modular MGX systems as well as Nvidia Spectrum Ethernet switches to deliver the highly precise timing the 5G protocol requires. The platform will reduce costs by increasing spectral efficiency while reducing energy consumption.

The systems will help explore applications in autonomous driving, AI factories, augmented and virtual reality, computer vision and digital twins. Future uses could even include 3D video conferencing and holographic communications.

Powering cloud networks


Separately, Huang unveiled Nvidia Spectrum-X, a networking platform purpose-built to improve the performance and efficiency of Ethernet-based AI clouds. It combines Spectrum-4 Ethernet switches with BlueField-3 DPUs and software to deliver 1.7x gains in AI performance and power efficiency.

Nvidia Spectrum-X, Spectrum-4 switches and BlueField-3 DPUs are available now from system makers including Dell Technologies, Lenovo and Supermicro.

Huang announced Nvidia is building Israel-1, a generative AI supercomputer in its Israeli data center. It will be built with Dell PowerEdge servers, the Nvidia HGX H100 supercomputing platform and the Spectrum-X platform with BlueField-3 DPUs and Spectrum-4 switches.

Huang also told the several thousand attendees in Taipei about two new supercomputers being built in Taiwan. Taiwan's National Center for High-Performance Computing will be the home of Taiwania 4. Built by Asus, it will come online next year.

Thanks to its Arm-based Grace CPUs linked to an Nvidia Quantum-2 InfiniBand network, Taiwania 4 will rank among the most energy-efficient supercomputers in Asia. That’s appropriate for its mission of tackling complex issues such as climate change. Taiwania 4 will use Nvidia Omniverse. It marks the third announcement of an Nvidia CPU-only supercomputer following news of systems for the Great Western 4 Alliance, in the U.K., and the Barcelona Supercomputing Center in Spain.

A second new supercomputer, Taipei-1, owned and operated by Nvidia, will feature 64 DGX H100 AI supercomputers, 64 Nvidia OVX systems and Nvidia networking to accelerate local R&D when it comes online later this year.

Accelerating Gen AI on Windows

Huang described how Nvidia and Microsoft are collaborating to drive innovation for Windows PCs in the generative AI era.

New and enhanced tools, frameworks and drivers are making it easier for PC developers to develop and deploy AI. For example, the Microsoft Olive toolchain for optimizing and deploying GPU-accelerated AI models and new graphics drivers will boost DirectML performance on Windows PCs with Nvidia GPUs.
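As a rough sketch of how a Windows application might prefer GPU acceleration when running an optimized model: ONNX Runtime (the runtime Olive targets) exposes execution providers by name, and an app can fall back from DirectML to CPU. The provider-name strings below follow ONNX Runtime's conventions, but the `pick_provider` helper itself is hypothetical, not part of any SDK:

```python
def pick_provider(available):
    """Return the most preferred execution provider that is installed.

    Preference order: DirectML (GPU-accelerated on Windows), then CUDA,
    then the universal CPU fallback. The names match ONNX Runtime's
    provider identifiers; the helper is illustrative only.
    """
    preference = [
        "DmlExecutionProvider",   # DirectML, accelerated on RTX GPUs
        "CUDAExecutionProvider",  # CUDA, if an Nvidia GPU build is present
        "CPUExecutionProvider",   # always-available fallback
    ]
    for provider in preference:
        if provider in available:
            return provider
    return None

# Example: a machine reporting only the DirectML and CPU providers
print(pick_provider(["DmlExecutionProvider", "CPUExecutionProvider"]))
```

In a real app, the chosen name would be passed to the runtime's session constructor as its preferred provider; the fallback chain ensures the model still runs on machines without a compatible GPU.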

The collaboration will enhance and extend an installed base of 100 million PCs sporting RTX GPUs with Tensor Cores that boost the performance of more than 400 AI-accelerated Windows apps and games.
