Nvidia dominated the AI arena in 2024, with shipments of its Hopper GPUs more than tripling to over two million among its 12 largest customers, according to estimates from Omdia.
But while Nvidia remains an AI infrastructure titan, it’s facing stiffer competition than ever from rival AMD. Among early adopters of its Instinct MI300-series GPUs, AMD is quickly gaining share.
Omdia estimates that Microsoft purchased approximately 581,000 GPUs in 2024 – more than any other cloud or hyperscale customer in the world. Of those, one in six was built by AMD.
At Meta – by far the most enthusiastic adopter of the barely year-old accelerators, according to Omdia’s findings – AMD accounted for 43 percent of GPU shipments at 173,000 versus Nvidia’s 224,000. Meanwhile, at Oracle, AMD accounted for 23 percent of the database giant’s 163,000 GPU shipments.
Nvidia remained the dominant supplier of AI hardware in 2024. Credit: Omdia
Despite growing share among key customers like Microsoft and Meta, AMD’s share of the broader GPU market remains comparatively small next to Nvidia.
Omdia’s estimates tracked MI300X shipments across four customers – Microsoft, Meta, Oracle, and GPU bit barn TensorWave – which totaled 327,000.
AMD’s MI300X shipments remained a fraction of Nvidia’s in 2024. Credit: Omdia
AMD’s ramp is all the more notable given that its MI300-series accelerators have only been on the market for about a year. Before that, AMD’s GPUs were predominantly used in more traditional high-performance computing applications, such as Oak Ridge National Laboratory’s (ORNL) 1.35 exaFLOPS Frontier supercomputer.
“They managed to prove the effectiveness of the GPUs through the HPC scene last year, and I think that helped,” Vladimir Galabov, research director for cloud and datacenter at Omdia, told The Register. “I do think there was a thirst for an Nvidia alternative.”
…
The AI market is bigger than hardware
Nvidia’s monumental revenue gains over the past two years have understandably shone a spotlight on the infrastructure behind AI, but it’s only one piece of a much larger puzzle.
Omdia expects Nvidia to struggle over the next year to grow its share of the AI server market as AMD, Intel, and the cloud service providers push alternative hardware and services.
“If we’ve learned anything from Intel, once you’ve reached 90-plus percent share, it’s impossible to continue to grow. People will immediately look for an alternative,” Galabov said.
However, instead of fighting for share in an increasingly competitive market, Galabov suspects that Nvidia will instead focus on expanding the total addressable market by making the technology more accessible.
The introduction of Nvidia Inference Microservices (NIMs) – containerized models designed to function like puzzle pieces for building complex AI systems – is just one example of this pivot.
“It’s the Steve Jobs strategy. What made the smartphone successful is the App Store. Because it makes the technology easy to consume,” Galabov said of NIMs. “It’s the same with AI; make an app store and people will download the app and they’ll use it.”