Arm’s Neoverse CPUs to Connect Directly with Nvidia GPUs Under Expanded NVLink Program - Trance Living
Arm Ltd. announced on Monday that future servers built with its Neoverse central processing units will be able to link directly to Nvidia Corp.’s graphics processing units through the NVLink Fusion interconnect, giving hyperscale cloud providers a simpler path to combine custom Arm-based chips with Nvidia’s dominant AI accelerators.

The initiative introduces a new protocol inside Neoverse designs that allows data to move between the CPU and multiple GPUs without the bottlenecks of traditional PCI Express connections. Up to eight Nvidia GPUs can attach to a single CPU in a typical generative-AI server, and the companies said the tighter coupling is aimed at large cloud operators that already favor in-house hardware configurations to control performance, cost and supply chains.
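The bandwidth gap the new protocol targets can be illustrated with a back-of-envelope calculation. This is a hedged sketch: the bandwidth figures below are nominal public per-direction numbers for PCIe 5.0 x16 and Hopper-generation NVLink, not values from the announcement, and real transfers never reach the theoretical peak.

```python
# Back-of-envelope comparison of CPU-to-GPU transfer time over PCIe vs NVLink.
# Bandwidth figures are illustrative assumptions: ~64 GB/s per direction for a
# PCIe 5.0 x16 slot, ~450 GB/s per direction for an H100's aggregate
# fourth-generation NVLink (900 GB/s bidirectional).

LINKS_GB_PER_S = {
    "PCIe 5.0 x16": 64.0,
    "NVLink 4 (aggregate)": 450.0,
}

def transfer_seconds(payload_gb: float, bandwidth_gb_s: float) -> float:
    """Ideal time to move `payload_gb` gigabytes at `bandwidth_gb_s` GB/s."""
    return payload_gb / bandwidth_gb_s

# Example: staging roughly 140 GB of model weights (a 70B-parameter model in
# 16-bit precision) from host memory to a GPU.
payload_gb = 140.0
for name, bw in LINKS_GB_PER_S.items():
    print(f"{name}: {transfer_seconds(payload_gb, bw):.2f} s")
```

At these nominal rates the NVLink path finishes the same transfer roughly seven times faster, which is the kind of gap that matters when weights and activations shuttle between the host CPU and up to eight attached GPUs.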

Broader Access to NVLink

By opening NVLink to Arm licensees, Nvidia is extending a technology that was previously reserved for systems built entirely around its own processors. The move follows the company’s earlier Grace Blackwell platform, which pairs an Arm-based Grace CPU with multiple Blackwell GPUs on a single board. While Grace Blackwell positions Nvidia as a full-stack chip supplier, Monday’s agreement signals a more flexible stance that lets customers choose a non-Nvidia CPU while retaining the high-bandwidth GPU interconnect.

Nvidia’s strategy of partnering across the industry has accelerated as demand for generative-AI infrastructure grows. The company already collaborates with Intel Corp. and Advanced Micro Devices Inc. on GPU-centric servers that use x86 processors. In September, Nvidia committed up to US$5 billion to an investment in Intel, a deal that also ensures NVLink compatibility for future Intel CPUs.

Rising Interest in Custom Arm Silicon

Major cloud platforms—Microsoft Azure, Amazon Web Services and Google Cloud—have been designing or deploying Arm-based CPUs to gain greater control over performance tuning and energy efficiency. Arm does not manufacture chips itself; instead, it licenses its instruction-set architecture and offers reference designs that partners can customize. The new NVLink Fusion protocol will be included in forthcoming Neoverse cores, enabling those custom chips to connect directly to Nvidia accelerators without additional interface logic.

The CPU historically served as the main processor in a server, but surging adoption of large language models has shifted the center of gravity to AI accelerators. Industry analysts note that GPUs now account for the bulk of capital spending at many data-center operators, driving interest in specialized interconnects that minimize latency and maximize throughput between chips.

Background on the Arm-Nvidia Relationship

Nvidia attempted to acquire Arm for US$40 billion in 2020, a plan that collapsed in 2022 after regulators in the United States and United Kingdom objected on antitrust grounds. Nvidia retained a small minority stake in Arm, which is majority-owned by Japan’s SoftBank Group. SoftBank sold its remaining shares in Nvidia earlier this month and is reportedly funding OpenAI’s large-scale “Stargate” data-center project, which will combine Arm-based processors with GPUs from Nvidia and AMD.

Despite the failed takeover, Monday’s announcement shows the two companies maintaining technical cooperation. By aligning on NVLink Fusion, they are offering cloud builders a standard pathway to mix custom Arm CPUs with Nvidia’s H100, H200 and upcoming B100 series GPUs inside a single server chassis.

Implications for Hyperscalers

Hyperscale operators typically optimize their infrastructure around workload-specific demands. Integrating Neoverse CPUs directly with Nvidia GPUs allows them to:

  • Reduce the number of discrete components and cables in AI servers.
  • Lower latency between the host processor and accelerators.
  • Simplify board design and cooling requirements.
  • Maintain flexibility to tailor CPU core counts, memory subsystems and power envelopes.

Arm indicated that the first chips featuring the enhanced interface are already under development, though specific launch dates were not disclosed. Nvidia did not announce new GPU products in conjunction with the news but emphasized that existing NVLink-capable accelerators will be supported.

Competitive Landscape

NVLink competes with other high-speed chip-to-chip standards such as AMD’s Infinity Fabric and the open Compute Express Link (CXL) standard. By integrating NVLink at the CPU design stage, Arm licensees can avoid external bridge chips, potentially giving Nvidia-based systems an edge in power efficiency and bandwidth.

Intel, the leading supplier of data-center CPUs, is pursuing its own roadmap to link Xeon processors with both its in-house Gaudi accelerators and Nvidia GPUs. AMD, meanwhile, continues to promote EPYC processors alongside its Instinct GPU line, using Infinity Fabric to tie components together.

For Arm, broader NVLink adoption may strengthen its position in data centers just months after its record-setting initial public offering. The company reported in its latest filings that Neoverse shipments more than doubled year over year, reflecting demand in networking equipment and cloud servers.

Industry observers will be watching how quickly cloud providers migrate from x86 to Arm in AI workloads and whether NVLink Fusion becomes a de facto standard for GPU connectivity outside Nvidia’s own CPU offerings.

