NVIDIA and Microsoft launch hyperscale GPU accelerator

Graphics chip maker NVIDIA announced on the 9th that it will work with software maker Microsoft to develop "HGX-1", a new hyperscale GPU accelerator blueprint designed to drive artificial intelligence (AI) cloud computing. The HGX-1 architecture is built to meet the demands of AI cloud computing, spanning applications such as autonomous driving, personalized medicine, speech recognition that surpasses human performance, data and image analysis, and molecular simulation.

NVIDIA said the new HGX-1 is a hyperscale GPU accelerator built on Microsoft's Project Olympus open-source design, giving hyperscale data centers a fast, flexible path into AI. HGX-1 is suited to AI workloads running in the cloud. Much as the ATX (Advanced Technology eXtended) standard did for PC motherboards more than 20 years ago, HGX-1 establishes an industry standard that can be adopted quickly and effectively to meet rapidly growing market demand.

NVIDIA co-founder and CEO Jensen Huang said that AI is a new computing model and therefore requires a new architecture to support it. The HGX-1 hyperscale GPU accelerator will do for AI cloud computing what the ATX standard did to make the PC ubiquitous. HGX-1 will let cloud service providers more easily meet surging demand for AI computing with NVIDIA GPUs.

Kushagra Vaid, General Manager of Azure Hardware Infrastructure at Microsoft, also noted that the HGX-1 AI accelerator delivers extreme performance scaling to meet the demands of fast-growing machine learning workloads, while its purpose-built design allows easy adoption in data centers around the world. For the thousands of companies and startups worldwide investing in AI and adopting AI-based approaches, the HGX-1 architecture delivers unprecedented configurability and performance in the cloud.

According to the plan, each HGX-1 host will be equipped with eight NVIDIA Tesla P100 GPUs. It features an innovative switching design based on NVIDIA NVLink interconnect technology and the PCIe standard, allowing the CPU to connect flexibly to any number of GPUs. This lets cloud service providers that standardize on the HGX-1 infrastructure offer customers a wide variety of CPU and GPU machine configurations.

Because cloud workloads are more diverse and complex than ever, under the HGX-1 architecture AI training, inference, and high-performance computing (HPC) jobs can each attach a different number of GPUs to the CPU to run optimally in different system configurations. Whatever the workload, HGX-1's highly modular design allows it to operate at peak performance. At the same time, HGX-1 delivers deep learning performance up to 100 times faster than traditional CPU-based servers, at roughly one-fifth the cost for AI training and one-tenth the cost for AI inference. Through this flexibility, HGX-1 will give hyperscale data centers around the world a fast and simple way into the AI field.
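The modular idea described above can be sketched as a simple attachment policy: different workload types bind different numbers of GPUs from the chassis to a CPU. This is a minimal illustration of the concept only; the workload names and GPU counts are assumptions for the sketch, not NVIDIA's or Microsoft's actual configuration software.

```python
# Illustrative sketch of HGX-1-style modular GPU attachment.
# The per-workload GPU counts below are assumed for demonstration.

GPUS_PER_CHASSIS = 8  # each HGX-1 host carries eight Tesla P100 GPUs

# Hypothetical policy: how many GPUs each workload class attaches
WORKLOAD_GPU_COUNTS = {
    "training": 8,    # AI training uses the full chassis
    "inference": 2,   # inference typically needs fewer GPUs
    "hpc": 4,         # HPC jobs fall in between
}

def configure(workload: str) -> dict:
    """Return a CPU-to-GPU attachment plan for the given workload type."""
    if workload not in WORKLOAD_GPU_COUNTS:
        raise ValueError(f"unknown workload: {workload}")
    gpus = WORKLOAD_GPU_COUNTS[workload]
    return {
        "workload": workload,
        "gpus_attached": gpus,
        "gpus_idle": GPUS_PER_CHASSIS - gpus,
    }

print(configure("inference"))  # a 2-GPU plan, leaving 6 GPUs free
```

The point of the design, as the article describes it, is that one standardized chassis can serve all three plans rather than requiring separate server SKUs per workload.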
