Nvidia’s strategy of partnering across the industry has accelerated as demand for generative-AI infrastructure grows. The company already collaborates with Intel Corp. and Advanced Micro Devices Inc. on GPU-centric servers that use x86 processors. In September, Nvidia committed up to US$5 billion to Intel’s foundry division, a deal that also ensures NVLink compatibility for future Intel CPUs.
Rising Interest in Custom Arm Silicon
Major cloud platforms—Microsoft Azure, Amazon Web Services and Google Cloud—have been designing or deploying Arm-based CPUs to gain greater control over performance tuning and energy efficiency. Arm does not manufacture chips itself; instead, it licenses its instruction-set architecture and offers reference designs that partners can customize. The new NVLink Fusion protocol will be included in forthcoming Neoverse cores, enabling those custom chips to connect directly to Nvidia accelerators without additional interface logic.
The CPU historically served as the main processor in a server, but surging adoption of large language models has shifted the center of gravity to AI accelerators. Industry analysts note that GPUs now account for the bulk of capital spending at many data-center operators, driving interest in specialized interconnects that minimize latency and maximize throughput between chips.
Background on the Arm-Nvidia Relationship
Nvidia attempted to acquire Arm for US$40 billion in 2020, a plan that collapsed in 2022 after regulators in the United States and United Kingdom objected on antitrust grounds. Nvidia retained a small minority stake in Arm, which is majority-owned by Japan’s SoftBank Group. SoftBank sold its remaining shares in Nvidia earlier this month and is reportedly funding OpenAI’s large-scale “Stargate” data-center project, which will combine Arm-based processors with GPUs from Nvidia and AMD.
Despite the failed takeover, Monday’s announcement shows the two companies maintaining technical cooperation. By aligning on NVLink Fusion, they are offering cloud builders a standard pathway to mix custom Arm CPUs with Nvidia’s H100, H200 and upcoming B100 series GPUs inside a single server chassis.
Implications for Hyperscalers
Hyperscale operators typically optimize their infrastructure around workload-specific demands. Integrating Neoverse CPUs directly with Nvidia GPUs allows them to:
- Reduce the number of discrete components and cables in AI servers.
- Lower latency between the host processor and accelerators.
- Simplify board design and cooling requirements.
- Maintain flexibility to tailor CPU core counts, memory subsystems and power envelopes.
Arm indicated that the first chips featuring the enhanced interface are already under development, though specific launch dates were not disclosed. Nvidia did not announce new GPU products in conjunction with the news but emphasized that existing NVLink-capable accelerators will be supported.
Competitive Landscape
NVLink competes with other high-speed chip-to-chip standards such as AMD’s Infinity Fabric and the open Compute Express Link (CXL) specification. By integrating NVLink at the CPU design stage, Arm licensees can avoid external bridge chips, potentially giving Nvidia-based systems an edge in power efficiency and bandwidth.
Intel, the leading supplier of data-center CPUs, is pursuing its own roadmap to link Xeon processors with both its in-house Gaudi accelerators and Nvidia GPUs. AMD, meanwhile, continues to promote EPYC processors alongside its Instinct GPU line, using Infinity Fabric to tie components together.
For Arm, broader NVLink adoption may strengthen its position in data centers just months after its record-setting initial public offering. The company reported in its latest filings that Neoverse shipments more than doubled year over year, reflecting demand in networking equipment and cloud servers.
Industry observers will be watching how quickly cloud providers migrate from x86 to Arm in AI workloads and whether NVLink Fusion becomes a de facto standard for GPU connectivity outside Nvidia’s own CPU offerings.