Semi-OT: Intel punished after earnings

https://www.marketwatch.com/story/intel-stock-declines-as-ea…

Intel Corp. executives expect profit margins to remain pressured in the long term as the chip maker builds out manufacturing capacity, leading to disappointing earnings guidance that dinged the company’s stock Wednesday afternoon.

Profit margins took center stage in Intel’s (INTC) earnings report for a second consecutive quarter, as the company’s earnings forecast fell below Wall Street expectations. Intel forecast GAAP gross margins of 49% and non-GAAP margins of 52% for the first quarter, which is expected to translate into GAAP earnings of 70 cents a share and non-GAAP earnings of 80 cents a share.

Analysts on average expected adjusted first-quarter earnings of 86 cents a share on revenue of $17.61 billion, while Intel forecast revenue of about $18.3 billion. Shares declined about 3% in after-hours trading following the results, after closing up 1.4% in the regular session at $51.69.

Intel executives plan to spend freely to build out manufacturing capacity amid a semiconductor shortage, which has drawn the ire of many analysts who are concerned the company’s aggressive capital buildout plans will weigh too heavily on profit margins. Last week, Intel confirmed it plans to invest more than $20 billion in a massive chip fabrication plant in Ohio, in addition to fabs in Arizona.

On the call, Intel Chief Executive Pat Gelsinger told analysts that the company has “a lot of catching up to do” in building out capacity, or “shells,” to address supply constraints.

“Boy, I lust for having a free shell today that we could be ramping into,” Gelsinger told analysts. “We simply have to build some more shell capacity and then we’ll be determining where is the best use and how to fill that as we start to build out.”

Chief Financial Officer David Zinsner said on his first earnings call with Intel that he feels comfortable with a 51% to 53% range in gross margin for the year. Longer term, Gelsinger said he expects margin recovery in the latter part of the five-year window he outlined last quarter.

So looks like Intel will be a buy someday but maybe not while they’re burning money on building out manufacturing. Street doesn’t like companies that spend money.

4 Likes

My takeaway was Q4 was a strong beat, but Q1 guidance was very weak… only $0.80 earnings expectation.

Alan

1 Like

Looks like AMD is punished after Intel earnings too.
Did somebody decide Intel’s “Big Spend” might negatively impact AMD?
–Alan

1 Like

It’s a puzzle trying to work out the reasons for AMD’s share-price weakness. Unless it’s insider trading on possible poor results (which I find hard to believe), I would guess the reason to be China’s threats to invade Taiwan. TSMC produced some good results this month, yet it’s off too, from $149 two weeks ago to approximately $118.70 today. Apple (TSMC’s biggest customer) doesn’t seem to be affected, though.

Wish I knew what was going on, but results on Tuesday next may clear the gloom.

Sorry that should be TSMC was $145 roughly 2 weeks ago.

The advantage for AMD is not nearly as clear-cut as it was 6 months ago.

  • Intel to invest in several new fabs, with the US taxpayers likely paying for some of it
  • Intel willing to use TSMC to bridge the gap
  • Intel Alder Lake performance
  • Apple M1 Max performance
  • Nvidia continues to take share and revenue

One could argue that AMD still has a lead, and/or AMD’s upcoming products are awesome. But I see it as much more difficult for AMD to continue with their 50% revenue growth. Stock prices are heavily influenced by growth rates. I expect it will be a couple of years before AMD again sets an all-time high.

1 Like

A point I have been trying to make is Intel’s goal is to catch up to TSMC, which is different from catching up to AMD’s use of TSMC technology. TSMC introduced N5 in late 2020, and AMD will not introduce an N5 product until late 2022; almost a two-year gap.

I do agree all your points are valid.
Alan

3 Likes

A point I have been trying to make is Intel’s goal is to catch up to TSMC, which is different from catching up to AMD’s use of TSMC technology. TSMC introduced N5 in late 2020, and AMD will not introduce an N5 product until late 2022; almost a two-year gap.

Totally correct, but I wanted to add to it. Why doesn’t AMD use TSMC’s most recent process? AMD has been quite clear that they have experimental/prototype chips on all processes. These may be TSMC pilots or AMD designs. Based on this, AMD decides which process node is best for a given new product. Thus AMD chose N6 for Rembrandt, while Zen 4 chiplets will use a variant of TSMC’s 5 nm process. I don’t expect it to be N4, but there are other N5 variants that AMD might be using. It is possible that AMD will use N4 or even N3 for Zen 4c (Bergamo). Bergamo is expected to arrive well after Genoa, and that may be to allow the process to mature.

The important point is that AMD chooses which technology to use not just on the basis of the smallest die size, but on a lot of other factors.

2 Likes

The important point is that AMD chooses which technology to use not just on the basis of the smallest die size, but on a lot of other factors.

I’m sure that there’s a mix of factors like “yield when we start using it”, “availability in volume,” “cost” (affected in part by demand for that particular process)… and what they can make off the parts that result. But I’m anxious about their ability to hold on to capacity on good enough nodes now that TSMC is the foundry that everyone thinks they must have.

But I’m anxious about their ability to hold on to capacity on good enough nodes now that TSMC is the foundry that everyone thinks they must have.

I ended up LOL-ing even though it is not such a laughing matter. Many years ago (in the early 1960s) I figured out that there was a certain combination of talents that made a person immune to suicide. The biggest part is getting sucked in by puzzles. A very good example comes from the world of contract bridge. If you are declarer when dummy comes down, what do you do first? Do you try to figure out, now that you can see half the deck, how to make the contract? Then you are probably safe from (computer-science job-related) suicide. Or do you count dummy’s points (and trumps) and try to decide where partner fouled up in the bidding? Why do I bring this up? Let me explain the imagined situation that had me laughing.

Digression you probably want to skip if you are not a programmer.

You show up for work and someone calls a meeting of all the IT development staff. Let’s say the meeting is called by the VP of R&D. The titles don’t matter; the situation does. In the meeting, the VP says that he just concluded a deal for fifty thousand N5 wafer starts at TSMC a year from today. “I need you people to take our existing designs and compile them against TSMC’s design rules. I’d like a rough report on progress by CoB the day after tomorrow.”

I know of one case where this basic situation occurred, and it involved Intel. I happened to be there for job interviews and went out to lunch with the compiler development team. I was told that before my plane landed in Boston, they had already had a meeting with venture capital people. What had happened? There was an oversight in the i432 design. It doesn’t matter now who made the error, and since the error was overlooking a necessary operation, anyone could have twigged to it in time to avert disaster.

The i432 chip was a capability machine. One hardware-defined node held a list of top-level capabilities. I won’t go into the mechanism for quickly matching a capability to the request in hand. The problem was that the instruction for adding a capability to a list didn’t work on the hardware-defined root capability. (It added capabilities at the front of the list. Oops!) The software developers had kludged around it by treating the second-level capabilities as root capabilities. That only added 5 microseconds to the execution of each instruction. To anyone with a few brain cells to rub together, manufacturing (and demonstrating) that as the first version of the iAPX 432 was a seriously bad idea. Within a few weeks, Rational had been founded and, in addition to some hardware engineers, everyone I interviewed with (and who told me not to take the job) was gone. (Rational was later purchased by IBM, and AFAIK some of the software tools Rational developed are still being sold and supported.)
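To make the kludge concrete, here is a toy sketch in Python (purely illustrative; the class, names, and layout are mine, not the actual i432 data structures, and the details are only as I remember the story) of why an insert-at-front instruction is useless when the hardware only knows the top-level list by one fixed node:

```python
class CapNode:
    """One entry in a toy capability list."""
    def __init__(self, cap, next_node=None):
        self.cap = cap
        self.next = next_node

# The hardware knows the top-level list only through this one fixed node.
HW_ROOT = CapNode("root")

def add_capability(head, cap):
    """Roughly what the add instruction did: prepend and return a NEW head."""
    return CapNode(cap, head)

new_head = add_capability(HW_ROOT, "file_server")
# The hardware still starts every lookup at HW_ROOT, so anything prepended
# in front of it is invisible to hardware lookups:
assert new_head is not HW_ROOT

# The software kludge: treat each second-level list as if it were a root and
# walk one extra level on every capability check, the extra ~5 microseconds
# per instruction mentioned above.
```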

Strange as it may seem, a company in England sold 432-based systems for the job they were originally intended for: provable code for safety-critical systems. The company is still in business and providing safety-critical code, but apparently not on any Intel hardware. The long-term result of that is that the Ada compiler that is part of GCC will also compile, and prove, safety-critical code. The safety-critical version of Ada is referred to either as SPARK or as the Ravenscar profile.*

Back to TSMC design rules.

Today, design rules are almost always implemented and checked using software from Cadence Design Systems or one of a half-dozen other companies. If you are going to modify a chip design so it meets the design rules, the first step is to have the design description in the proper language for the tools that TSMC uses. (You don’t even want to think about a case where TSMC updates the design rules and you suddenly have to port them to a different software chain.) If you have your design in Cadence Assura or Pegasus and you run it against TSMC’s design rules, how many messages about violating some rule’s requirements do you expect? An estimate to the nearest billion will do. :frowning:

I hope you didn’t start a print job. The way you do it is to start with a key, but small, area of the design and run it through the validation tool. In a week or three, when you have that section reporting no errors, try again with a nearby section. Eventually, you hope to have those sections fit together like puzzle pieces. Of course, that process will introduce new errors. Doing this when you are starting with a design that already complies with TSMC design rules at a larger scale can be (relatively) painless, if the rules for the new node are, proportionally, no stricter than those for the previous node.
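A rough sketch of that loop, assuming a batch DRC flow you can script (the run_drc and fix_violations helpers below are hypothetical stand-ins, not any real Pegasus or Assura API):

```python
def run_drc(region, rule_deck):
    """Hypothetical wrapper around whatever batch DRC tool you actually run;
    returns a list of rule violations for one region of the layout."""
    raise NotImplementedError

def fix_violations(region, violations):
    """Hypothetical placeholder for the hand edits and scripts that clean up
    one region; returns the modified region."""
    raise NotImplementedError

def clean_up(regions, rule_deck):
    cleaned = []
    for region in regions:                  # start with one key, small block
        violations = run_drc(region, rule_deck)
        while violations:                   # a week or three per region...
            region = fix_violations(region, violations)
            violations = run_drc(region, rule_deck)
        cleaned.append(region)
        # Stitching the next region onto the cleaned ones can reintroduce
        # violations at the boundaries, so in practice it is not this linear.
    return cleaned
```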

Trying to do this with a design made according to (non-TSMC) design rules is going to be painful and take years. Note that I just slipped a nasty in here: if you started with a design built with some other company’s software, conversion is going to be painful but doable. Now we have the software compatibility issue out of the way, and we are at the real test. You need to modify the design so it will work with the new design rules. There are tools, for example, to deal with shrinking a fan-out of three or more down to two. You will also have to deal with timing constraints: an adder or multiplication unit has to finish in time, or at least have the right output bits ready, even if some computation is still going on. (An example might be that the status flags could be correct later than the addition or multiplication result.)
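The fan-out step, for instance, is a purely structural transform; here is a toy version (again Python, names mine) that splits any net driving more than two loads into a tree of buffers, ignoring the timing and area trade-offs a real tool would weigh:

```python
def buffer_tree(driver, loads, max_fanout=2):
    """Return (source, sinks) pairs where no source drives more than max_fanout sinks."""
    edges = []
    level = list(loads)
    while len(level) > max_fanout:
        next_level = []
        for i in range(0, len(level), max_fanout):
            buf = f"buf_{driver}_{len(edges)}"   # invented buffer instance name
            edges.append((buf, level[i:i + max_fanout]))
            next_level.append(buf)
        level = next_level
    edges.append((driver, level))
    return edges

# One gate driving five loads becomes a two-level buffer tree:
print(buffer_tree("adder_cout", ["L1", "L2", "L3", "L4", "L5"]))
```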

For some chips, starting from a blank sheet of paper might be the fastest path to a usable design. By then TSMC will have doubled its high-end production capacity. So the only potential risk to AMD comes from a company already using TSMC as a fab. Would TSMC ditch AMD if Intel asked them to? The image that brings to mind is sleeping in a cage with a hungry tiger.

  • Contrary to popular opinion, the Ravenscar profile doesn’t forbid exceptions. But it must be documented which handler will handle the exception, and there should be a separate test for it. For example, if a telephone line or other remote connection is broken, an exception is the cleanest model for dealing with it. But you will need to demonstrate that the break that caused the exception will not result in a blockage of any other tasks. Think about a line being dropped in the middle of committing a transaction. (Shudder for a second, then move on.) I’m not asking you to implement it, and Ravenscar or SPARK will help a lot. Ada exceptions are carefully designed to make it easy to handle such exceptions cleanly, including canceling other tasks if necessary. (Can you tell that I loved working on language design? :wink:)
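Not Ada, but a rough sketch of that pattern in Python (asyncio here stands in for Ada tasking; the connection object and its methods are invented for illustration):

```python
import asyncio

class LineDropped(Exception):
    """Raised when the remote connection goes away mid-operation."""

async def commit_transaction(conn):
    await conn.send("COMMIT")    # invented protocol step
    await conn.recv_ack()        # assumed to raise LineDropped if the line breaks

async def handle_client(conn, dependent_tasks):
    try:
        await commit_transaction(conn)
    except LineDropped:
        await conn.rollback()            # leave no half-committed state behind
        for task in dependent_tasks:     # cancel only the tasks tied to this line
            task.cancel()
    # Nothing in the handler blocks the event loop, so every other client's
    # task keeps running; that is the property you have to demonstrate.
```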

Totally correct, but I wanted to add to it. Why doesn’t AMD use TSMC’s most recent process?

Apple.

TSMC has an “Apple first” policy. With any new TSMC process, Apple gets first rights on it. This is the Tim Cook playbook. Apple finds a competent partner, then funds a new production line in return for guaranteed production levels. Once Apple is done with that line, the partner can offer it to other customers.

It is possible that AMD will use N4 or even N3 for Zen 4c (Bergamo).

“Possible” only if Apple were to cut production levels back and AMD outbid Qualcomm, Nvidia, Intel, and everyone else, or if TSMC changed their “Apple first” policy and risked angering the source of 25% of their revenue.

Apple Books TSMC’s Entire 5nm Production Capability
https://www.extremetech.com/computing/315186-apple-books-tsm…

Apple will be TSMC’s first customer with access to its 3 nm nodes
https://www.notebookcheck.net/Apple-will-be-TSMC-s-first-cus…

TSMC “Apple-first” 3nm policy leads to AMD and Qualcomm mutiny
https://www.club386.com/tsmc-apple-first-3nm-policy-leads-to…

4 Likes

TSMC “Apple-first” 3nm policy leads to AMD and Qualcomm mutiny
https://www.club386.com/tsmc-apple-first-3nm-policy-leads-to…

Don’t get misled by clickbait headlines. Why is AMD looking at Samsung’s 3nm process? Because TSMC has been having troubles with their N3 process. In particular, TSMC has switched back to FinFETs from GAA (gate-all-around) for their N3 process. That will make design-rule fan-out limits lower and/or require wider transistors where large fan-outs are required. (Large in this case might be 3, but 4 is possible. Five is right out! :wink:)

https://www.youtube.com/watch?v=xOrgLj9lOwk

AMD chooses which technology to use not just on the basis of the smallest die size, but on a lot of other factors.

Note that one of those factors almost certainly is market competitiveness.

When I was at NVIDIA, we were told that we strategically held back a product or two just because there was no real competition for the current product. Similarly, if AMD thinks they have a winner on a cheaper (older) process, they may choose to use that instead of a more expensive, newer process. Certainly NVIDIA has released plenty of graphics cards over the last few years that seem to be a node behind AMD and yet are winning in the market. Same with adopting more expensive memory technologies, coolers, etc.

m

4 Likes