Investors reacted cautiously to the projections. AMD shares declined about 3% in after-hours trading on the day of the presentation, though the stock remains up nearly 100% since the beginning of 2025.
The Santa Clara–based chip maker is positioning itself as the primary alternative to Nvidia for companies building large-scale AI infrastructure. Organizations across cloud computing, social media and research have committed hundreds of billions of dollars to graphics processing units (GPUs) that train and run machine-learning models such as OpenAI’s ChatGPT. Rising costs and limited supply have encouraged buyers to seek second-source suppliers, a role AMD aims to fulfill as the only other large GPU developer.
In October, AMD disclosed a multiyear agreement to provide OpenAI with Instinct-branded AI accelerators. The arrangement begins with enough hardware in 2026 to consume roughly one gigawatt of power and could eventually give OpenAI a 10% equity stake in AMD. The chip maker also highlighted long-term supply commitments from Oracle and Meta Platforms.
Next-generation Instinct MI400X GPUs are scheduled to ship in 2026. AMD said those parts will be offered in rack-scale systems holding 72 interconnected chips, eliminating many of the networking and management hurdles associated with assembling large clusters manually. Nvidia has marketed rack-scale options across three product generations; AMD views parity in this area as essential for serving customers training the largest AI models.
During the presentation, Chief Executive Lisa Su updated AMD’s total addressable market estimate for data-center AI silicon and systems to approximately $1 trillion by 2030, representing compound annual growth of about 40%. The forecast doubles an earlier company estimate that pegged the opportunity at $500 billion by 2028. Unlike the prior figure, the new projection includes central processing units (CPUs) in addition to GPUs. AMD recorded $5 billion in AI chip revenue in fiscal 2024.
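The two projections are roughly consistent with each other: extending the earlier $500 billion (2028) figure at the stated ~40% annual rate for two more years lands near the new $1 trillion (2030) mark. A quick check (the exact rate and base year are taken from the figures above; treat this as illustrative arithmetic, not AMD's own model):

```python
# Consistency check on AMD's stated TAM figures (illustrative only):
# growing the earlier $500B (2028) estimate at the cited ~40% CAGR
# for two more years should land near the new $1T (2030) figure.
prior_estimate = 500e9   # earlier TAM estimate, for 2028
cagr = 0.40              # ~40% compound annual growth, per the presentation
years = 2030 - 2028

implied_2030 = prior_estimate * (1 + cagr) ** years
print(f"Implied 2030 TAM: ${implied_2030 / 1e12:.2f} trillion")  # ≈ $0.98 trillion
```

The figures line up to within rounding; note that an exact match is not expected anyway, since the new projection also folds CPUs into the total.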

While AI-centric products drew the bulk of attention, AMD emphasized that established lines continue to expand. Epyc server CPUs remain the company’s largest revenue contributor and compete directly with Intel’s Xeon processors and several Arm-based alternatives. AMD also supplies custom chips for game consoles, networking components and a range of embedded devices.
Management stated that performance across these traditional segments is “firing on all cylinders,” reinforcing confidence in the broader business. The combination of existing franchises and emerging AI demand underpins the 35% overall revenue growth target.
Industry observers note that sustained progress will require AMD to secure manufacturing capacity, advance chip-to-chip interconnect technology and refine software tools that allow developers to migrate workloads from Nvidia’s CUDA ecosystem. According to a recent analysis by Reuters, Nvidia’s entrenched position stems in part from a mature software stack that accelerates adoption of its hardware.
Even so, the scale of projected spending suggests room for multiple suppliers. AMD estimates that customers will deploy AI hardware at an unprecedented rate through 2030, reinforcing its belief that demand will remain “insatiable.” The company’s leadership argued that new product introductions, expanded manufacturing partnerships and deepening relationships with major cloud providers position AMD to capture a meaningful share of that growth.
Image credit: David Paul Morris / Bloomberg