I don’t think Intel misjudged the space at all. I think they misjudged their capabilities. Intel’s space has been big iron for decades. It brings in the most revenue and the biggest margins. Extra capacity flowed downhill to the next-highest margins in order: small servers, workstations, gaming computers, desktops/mobile, etc. There was an inflection point in there where mobile overtook desktops in margins. AMD was drastically superior on laptops because their architecture was designed for better yields and flexibility, whereas Intel was committed to servers. That helped AMD, but I didn’t see it coming, and I don’t think it hurt Intel as much as the combination of ever larger server CPUs and a process that was no longer the best.
As for GPUs, there were two GPU manufacturers worth mentioning: Nvidia (the best and biggest) and ATI (the plucky little competitor, like AMD to Intel). AMD paid up (at the time I thought overpaid) for ATI, while Intel tried to develop its own graphics, because that was their corporate mentality and it had yet to fail them. But eventually AMD figured out how to combine ATI graphics with x86 to create APUs, which were low margin but cheap and relatively high volume, a good fit for AMD’s foundries at the time. That provided a backbone of revenue as AMD owned all the gaming consoles, and eventually that market grew as tablets and Chromebooks came along.
There were other things going on at the physics level that others here can explain better than I can, but basically the Zen design spread out hotspots. There’d be a hot chiplet here and a hot data-transfer line over to a hot controller unit there, and more chiplets doing the same, so with Zen you had about the same amount of heat generated but over a much larger surface area, and overall the parts ran cooler. Cooler allows higher clocks, which makes things faster. And with MANY small chiplets per wafer, yields (the number of good die candidates per wafer burned) rose significantly. Say there are 64 chiplets on a wafer with ten defects. The worst case is you get 54 good chiplets, a very high yield. If some of those defects land on the same chiplet or on the interstitial areas between chiplets, yields get even better. Whereas Intel’s server-centric process was generating ever denser (hotter) chips and far fewer CPUs per wafer, so heat kept being more of a problem and yields dropped. Intel was still making money hand over fist because nobody could compete well with them in the highest-revenue server sector.
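To put rough numbers on that yield argument, here’s a minimal Monte Carlo sketch. It assumes defects land uniformly at random and each defect kills only the die it lands on; all counts are illustrative, not real process data:

```python
import random

def avg_yield(num_dies, num_defects, trials=100_000):
    """Average fraction of good dies when each defect randomly hits one die."""
    good = 0
    for _ in range(trials):
        # A set: multiple defects on the same die only kill it once.
        hit = {random.randrange(num_dies) for _ in range(num_defects)}
        good += num_dies - len(hit)
    return good / (trials * num_dies)

# Same wafer, same ten defects: 64 small chiplets vs. 8 big monolithic dies.
print(f"64 small chiplets: {avg_yield(64, 10):.1%} good")  # roughly 85%
print(f"8 monolithic dies: {avg_yield(8, 10):.1%} good")   # roughly 26%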
Then Zen servers were developed. If you make a Zen device with fewer chiplets, you can drop it straight into a smaller-form-factor product and ship it. But for servers, you probably need to have it running for two years or so before anyone buying a mission-critical server would consider buying from you. So AMD grew revenues on high-volume, low-margin products, and the added capability APUs gave laptops made laptops more profitable. Why would anybody buy a desktop for an employee when they could get the same work done on a laptop that employees could take home or on business trips?
Those revenues put AMD into the black while AMD’s servers were being validated. ARM also put new pressure on Intel’s servers, because all those years of price hikes meant nobody was really loyal to Intel anymore. Nvidia was the first to figure out how to make GPUs really improve servers, and AMD followed. Intel’s graphics weakness meant they had to buy other folks’ GPUs to speed up their servers’ performance. The process failure was hard to envision given Intel’s long history and, from what I read, self-inflicted.
What advantages does Intel have these days? They are sitting on a ton of cash, but also likely to face shareholder lawsuits. They do have backwards compatibility that IMO is more likely to work than with servers from other manufacturers, for customers who want to expand their data center with new servers that can talk to the old ones, but that market shrinks with every non-Intel server purchased. Every time a server with a better total cost of ownership (TCO) comes along, it makes more sense to slash power usage, punt the old server, and put in something that will save money every month.
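As a back-of-the-envelope illustration of that TCO math (a sketch with made-up numbers, not anyone’s real pricing), a newer server often consolidates several old ones, so the power savings compound:

```python
def months_to_breakeven(new_cost, old_watts, old_count, new_watts,
                        dollars_per_kwh=0.12):
    """Months of 24/7 operation before power savings repay the new server."""
    watts_saved = old_watts * old_count - new_watts
    kwh_per_month = watts_saved / 1000 * 24 * 30
    return new_cost / (kwh_per_month * dollars_per_kwh)

# Hypothetical: one $6,000 server at 400 W replacing four 900 W boxes.
print(f"{months_to_breakeven(6000, 900, 4, 400):.0f} months")  # ~22 months
```

Under those assumed numbers the new box pays for itself in under two years on electricity alone, before counting maintenance, rack space, or performance.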
I don’t think Gelsinger was swinging wildly. He was process focused. But so is TSMC, and with Intel no longer close to competitive (which is the ugly truth nobody wanted to discuss when Gelsinger came back), they have to do MUCH better than TSMC to catch up. I think Intel needs to give up on its Foundry business, close the fabs, lay off the employees, fix the process gap (from what I read here, they’ve already fixed much of the architecture gap), and hope they have enough savings to make that happen. That would cut the losses substantially, leaving a much leaner company. But that mandate has to come from a board of directors that doesn’t want to lose their Intel money or especially their Intel stock value, a board that allowed Intel to deteriorate this far and wants to protect their resumes by not admitting how bad things have gotten. Now that TSMC hates Intel, I see few paths to viability that don’t start with fixing Intel’s process issues, which is exactly what Gelsinger was trying to do.
Until Intel admits how bad things have become and starts a complete turnaround, things are only going to get worse. I believe Intel has enough smart people, motivated to protect retirement portfolios built on Intel stock options, that good ideas will come, and maybe already have. But I lack confidence in Intel’s board to recognize any company-saving idea as worth following if it in any way implies that maybe the board shares a large chunk of the blame.
Fool on!
Roleplayer