Amazing that no one else has commented on this. I had a nice day off it.
I thought it was tremendous validation for AMD, a sign that NVDA no longer owns the whole ball of wax. Broadcom may be less happy about it since it suggests that custom silicon may be less in demand for AI/ML.
Semiconductor maker AMD will supply its chips to artificial intelligence company OpenAI as part of an agreement to team up on building AI infrastructure, the companies said Monday.
OpenAI will also get the option to buy as much as a 10% stake in AMD, according to a joint statement announcing the deal. It’s the latest deal for the ChatGPT maker as it races to beef up its AI computing resources.
Under the terms of the deal, OpenAI will buy the latest version of AMD's high-performance graphics chips, the Instinct MI450, which is expected to debut next year.
The agreement calls for supplying 6 gigawatts of computing power for OpenAI's "next generation" AI infrastructure, with an initial 1-gigawatt deployment of chips in the second half of 2026.
AMD also issued OpenAI a warrant allowing the AI company to buy up to 160 million shares of AMD common stock. That amounts to about 10% of the chipmaker based on AMD's 1.6 billion shares outstanding. The warrant will vest based on two kinds of milestones: the amount of computing power deployed, and unspecified "share-price targets."
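The dilution arithmetic behind that warrant is worth spelling out. A quick sketch, using only the share counts from the announcement above (the assumption that the 160 million shares would be newly issued, rather than bought back, is mine):

```python
# Warrant math from the announcement: up to 160M shares against
# ~1.6B AMD shares currently outstanding.
warrant_shares = 160_000_000
outstanding = 1_600_000_000

# Stake as a fraction of today's share count (the ~10% headline figure).
stake_of_current = warrant_shares / outstanding

# If the warrant shares are newly issued, existing holders get diluted,
# so OpenAI's slice of the enlarged share count is slightly smaller.
stake_after_issue = warrant_shares / (outstanding + warrant_shares)

print(f"{stake_of_current:.1%} of current shares")     # 10.0%
print(f"{stake_after_issue:.2%} after full exercise")  # 9.09%
```

So the "10%" is measured against today's float; after a full exercise of newly issued shares, the actual stake would be a bit over 9%.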
AMD of course surged to $226 at the open but pulled back a lot over the day, closing at $204.
But the talking heads worry that a big AI correction is coming. How long can explosive growth continue? And can all AI companies make money at it? What happens when they fail?
In our discussion group someone noted that equipment will be obsolete after four to five years. Will outdated data centers get refurbished? Will assets be salable or salvaged as junk?
I’d think that individual cards or servers or racks would be routinely swapped out for newer and faster hardware steadily over time, much like failing servers are currently being swapped out daily in Google server farms and shipping containers.
I think 4 to 5 years is more of a historical number. Moore's law has been slowing, and we are now seeing longer times between significant improvements in silicon technology. I am not sure what the depreciation plan is for the massive deployments being done, but I suspect a ten-year life is reasonable.
It’s a strange market in terms of server life cycle. Last I saw, the big cloud providers had switched to something like a 6-year cycle for ordinary CPU-centric hardware, in part because nothing Intel was shipping was actually better enough, or even different enough, to be worth an upgrade.
But I could believe that AI hardware is going to have shorter lifetimes than that. It’s such a fast-moving target right now. Maybe some of this hardware will transition into a role doing inference, as opposed to training, as a way of squeezing more productive life out of it.
There is a big software component to AI that could require new hardware to function. That said, I am not sure they can switch to any shorter word widths. :-)
As a related item, Intel switched to an 8-year depreciation cycle for its silicon fabrication equipment, so they expect to use each node much longer than they did historically. TSMC has been cycling much faster than Intel on performance/watt improvements, but Intel is now making some strides (a little late, as AMD now has a big chunk of the market). It used to be that nodes were 2 years apart with a factor-of-2 improvement each; now they are 3 or 4 years apart and lucky to deliver a 25% improvement.
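That slowdown compounds dramatically over a machine's life. A rough sketch: the 2x-per-2-years and 25%-per-3-to-4-years figures are from the post above, while the 10-year horizon and the 3.5-year midpoint for the new cadence are my illustrative assumptions:

```python
# Cumulative performance/watt gain over 10 years under the historical
# node cadence (2x every 2 years) vs. today's (~25% every ~3.5 years).
years = 10

old_cadence = 2.0 ** (years / 2)      # 5 node steps, 2x each
new_cadence = 1.25 ** (years / 3.5)   # ~2.9 node steps, 1.25x each

print(f"historical: ~{old_cadence:.0f}x over {years} years")   # ~32x
print(f"current:    ~{new_cadence:.1f}x over {years} years")   # ~1.9x
```

Under the old cadence, decade-old hardware was hopelessly outclassed (~32x behind); under the new one it trails by less than 2x, which is exactly why longer depreciation schedules start to look reasonable.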
I think we have seen a typical transition in various computer systems: search, database software, browsers, PCs, etc. We expect maybe two suppliers to dominate and others to specialize in well-defined sectors. We expect the same to happen in AI. Clearly OpenAI, with ChatGPT, expects to be a survivor, maybe a dominant leader.
Usually the leader continues to dominate by buying out competitors.
Can all AI players expect to prosper? Doubtful. Can loser assets be sold? Or will they be obsolete and scrapped?
Of course they will be. IIRC about half the cost of a data center is the building, the grid connection, power distribution, cooling, etc. Another significant chunk is all the network connections both external and internal.
AMD’s AI superhighway won’t open until late 2026
AMD is finally getting the green light to build a 6-gigawatt AI superhighway with OpenAI. But it takes time to deliver on a project this big.
The on-ramp opens in late 2026, when the first AMD MI450/Helios systems go live. Traffic really starts moving in 2027, when each new lane – Instinct chips and Helios racks – can add many billions in annual revenue once it’s humming along at full scale. AMD’s management expects the partnership to add more than $100 billion to the company’s sales over “the next few years,” including direct OpenAI shipments and a smattering of additional deals inspired by this week’s announcement.
The speed limit and mile markers are baked into a complex set of stock warrants: OpenAI unlocks vesting based on actual chip deployments and AMD share-price targets.
If all targets are met, OpenAI could acquire up to 160 million AMD shares over five years. That’s about 10% of the 1.6 billion shares AMD has in circulation today. The final tranche vests when AMD’s stock reaches $600 per share, about triple the stock’s closing price on Oct. 6.
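The "about triple" claim is easy to sanity-check against the $204 close mentioned earlier in the thread:

```python
# Final warrant tranche vests at $600/share; AMD closed Oct. 6 at ~$204.
close_oct6 = 204.0
target = 600.0

print(f"required gain: {target / close_oct6:.2f}x")  # 2.94x, i.e. roughly triple
```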
Whether OpenAI goes public or not, this structure gives the private company market-based performance incentives. It’s a great road for AMD and a genuine route around Nvidia’s pricey tollbooths – but it’s a long drive with scheduled maintenance, not an instant teleport.
Besides all the chip sales to OpenAI, it gives all other potential customers assurance that AMD’s software will be compatible, thus draining the CUDA moat.
It seems to me that the gains will continue to go to NVDA and AMD for the near-term future. These two companies seem to have the dominant positions in this market with their GPUs…doc
Exactly! For OpenAI to have come on board, there can be no doubt that the platform is Plenty Close Enough for anything any customer might want to do. Which is huge.
I was at an infrastructure conference about a year ago in SF, and AMD was promising to get to the point where software compatibility was ironclad. I believed them, because they have been trustworthy under Lisa, but it’s great to see them actually delivering.