Nvidia confirms AI still growing like crazy

Revenue up 34% QoQ and 206% YoY
GAAP Gross Margin now at 74%, up 3.9 pts QoQ and up 20.4 pts YoY
Non-GAAP Gross Margin now at 75%
GAAP EPS: $3.71, up 50% QoQ and up 1,274% YoY (!!)
Non-GAAP EPS: $4.02, up 49% QoQ and up 593% YoY (!)

Compared to Wall Street expectations, revenue came in at $18.12B versus $16.18B expected, and EPS came in at $4.02 per share versus $3.37 expected.

Even gaming revenue was higher than Wall Street expected.

Guidance-wise, Nvidia is saying $20B for next quarter, +/- 2%, which would be 231% growth YoY (rough arithmetic below).
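
For the 231% figure: the year-ago quarter (Q4 FY2023) revenue of about $6.05B comes from Nvidia's prior filings, not from this post, so treat this as my own back-of-the-envelope check:

$$
\frac{\$20.0\text{B (guided)}}{\$6.05\text{B (year-ago quarter)}} \approx 3.31 \quad\Longrightarrow\quad \text{YoY growth} \approx 231\%
$$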

While they expect that sales to China and other US-restricted countries will significantly decline this upcoming quarter, they believe demand in other areas will more than compensate, and they are working on versions that meet the varied licensing requirements for future quarters.

CEO Jensen Huang talked not just about expansion of clouds for things like “co-pilots,” but also new Country Clouds where every major country in the world is establishing their own AI clouds trained on their own sovereign data.

I’m listening to the call now, and may post a follow-up after I’ve digested it some more.

70 Likes

My personal take-away from the call is that Nvidia is far more than just an AI silicon company. There was a lot of talk about various services being provided, mostly on the cloud. I’m still digesting the announcements and descriptions of:

Sovereign AI Clouds:
CEO Huang said “You’re seeing sovereign AI infrastructure. People now recognize that they have to utilize their own data, keep their own data, keep their own culture, process that data, and develop their own AI. … The number of sovereign AI clouds that are being built is really quite significant. And my guess is that almost every major region, and surely every major country, will have their own AI clouds.”

Nvidia AI Foundry for the “development and tuning of custom generative AI enterprise applications running on Azure. Customers can bring their domain knowledge and proprietary data and we help them build their AI models using our AI expertise and software stack in our DGX cloud, all with enterprise grade security and support. SAP and Amdocs are the first customers of the NVIDIA AI foundry service on Microsoft Azure.”

DGX Cloud service:
“Our monetization model is that with each one of our partners they rent a sandbox on DGX Cloud, where we work together, they bring their data, they bring their domain expertise, we bring our researchers and engineers, we help them build their custom AI. We help them make that custom AI incredible. Then that custom AI becomes theirs. And they deploy it on a runtime that is enterprise grade, enterprise optimized or performance optimized, runs across everything NVIDIA. We have a giant installed base in the cloud, on-prem, anywhere.”

Nvidia AI Foundry customers pay monthly for access to a sandbox and services from the NVIDIA team to help them build and train custom generative AI models. The enterprises can then deploy on NVIDIA AI Enterprise for $4,500 per GPU per year.

Back to hardware: Nvidia is doing more than just GPUs, like AI-tuned networking (InfiniBand). And even on the compute side, it’s worth understanding a bit about the various “SuperChips” that Nvidia sells. Huang pointed out that one of them, the HGX “Hopper” H100, has 35,000 parts and weighs 70 pounds. It’s more of a board than a chip; no wonder each costs tens of thousands of dollars.

While Nvidia expects to have its latest AI data center GPU, the Hopper H200, ready to ship by Q2 next year (and it’s 2X faster than the current H100), Nvidia also has a “Grace” data center CPU, based on ARM. They’ve got a board/SuperChip with two dies containing 144 ARM cores in total. The latest Apple M2 processor, for comparison, has just 8 ARM cores. It’s interesting to me that while ARM’s success was initially tied to lower power usage than Intel’s x86, which of course is important for battery-powered phones and tablets, Nvidia is now advertising low power consumption for data center CPUs. Data centers are getting so large that saving 2X on power consumption and infrastructure costs, with the same or better performance, is apparently a big deal.

As you may recall, Nvidia wanted to buy ARM the company, but regulators (the UK prominent among them) nixed that deal. Still, Nvidia is obviously a big licensee, and the fact that they’re coming out with a data-center-oriented ARM CPU shows Nvidia isn’t even just an AI play. It also introduces new data center architectures to enterprises. Nvidia has a “Grace Hopper” combination SuperChip that pairs a Grace ARM-based CPU with one or more Hopper GPUs for the data center. Even heavy AI workflows have non-AI compute requirements, so this combination should prove compelling.

Nvidia’s CUDA software platform remains a huge moat: it lets customers upgrade their Nvidia hardware with little to no change to their software, and it means switching to any other vendor’s product, even if they wanted to, would be a big undertaking.
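
To make that moat concrete, here’s a minimal sketch of my own (hypothetical, not anything Nvidia showed): the same CUDA source compiles unchanged for Ampere, Hopper, and, via embedded PTX, future GPU generations, which is why upgrading hardware rarely means rewriting software.

```cpp
// vector_add.cu -- hypothetical example, not from the call.
// The same source targets multiple GPU generations at once:
//   nvcc -gencode arch=compute_80,code=sm_80 \       (Ampere, e.g. A100)
//        -gencode arch=compute_90,code=sm_90 \       (Hopper, e.g. H100)
//        -gencode arch=compute_90,code=compute_90 \  (PTX for future JIT)
//        vector_add.cu -o vector_add
#include <cstdio>
#include <cuda_runtime.h>

// Each thread adds one element; identical code on every architecture.
__global__ void vector_add(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    float *a, *b, *c;
    // Unified memory is visible to both CPU and GPU.
    cudaMallocManaged(&a, n * sizeof(float));
    cudaMallocManaged(&b, n * sizeof(float));
    cudaMallocManaged(&c, n * sizeof(float));
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    vector_add<<<(n + 255) / 256, 256>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);  // expect 3.000000
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```

Switching to a non-CUDA accelerator means porting and re-validating every kernel like this one, which is the switching cost the moat argument rests on.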

A profitable high-growth megacap that even pays a dividend. Imagine that.

65 Likes

How much of all of this is priced into the current stock price?

3 Likes

According to the market this morning, all of it and then some.
But I will hold and maybe increase. The results are too mind-blowing. At this point, I will just trust Jensen Huang.

12 Likes

Depends on which parts of “all of this” one is talking about. The analysts covering Nvidia are all in the semiconductor field, but Nvidia is expanding beyond just making and selling chips. Only one of the questions on the conference call was about services, and that was basically asking what the heck it is and how Nvidia makes money on services. These analysts all think in terms of chip production, so when Huang talks about providing not just cloud access in various forms, but also consulting and subscription services (presumably mostly recurring), their heads explode.

So, what did analysts take away from the call?
• They’re worried about export restrictions to China and elsewhere, which Nvidia admitted will hurt growth this next quarter and be a drag on future quarters until new products come out. This next quarter’s revenue guidance was raised only about 10% over this last quarter, mostly due to the expected export revenue hit.
• They’re skeptical that AI has legs. I saw one article saying that AI is the new crypto for GPUs and is a fad that will fade just as fast. One analyst on the call even asked if Data Center revenue will still be growing in 2025, lol.
• I wonder if they really understand what the ARM-based “Grace” chips mean for data centers. Ampere has had the ARM-for-servers market pretty much to itself, but that’s a private company, so they may not be aware.

Forward PE is about 30 now, for what that’s worth.

57 Likes

Nvidia’s Forward PE is actually better than that:

GuruFocus has it at 24.28

• Last quarter’s EPS (non-GAAP) was $4.02. Morgan Stanley has next quarter’s EPS at $4.48, and for the next full fiscal year (which starts after next quarter) at $20.02. That would equate to a 23.85 Forward PE (see the arithmetic after these bullets).

• Should the China overhang not be as bad as is currently feared by Mr. Market, NVDA would be even more undervalued.
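
Back-of-the-envelope on that 23.85 figure (the roughly $477 share price is implied by the numbers above, not stated in the post):

$$
\text{Forward P/E} = \frac{\text{Price}}{\text{Forward EPS}} \approx \frac{\$477.50}{\$20.02} \approx 23.85
$$

GuruFocus’s 24.28 presumably just reflects a slightly different price or EPS estimate.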

34 Likes

It seems Mr. Market is getting more worried about China export restrictions affecting Nvidia:

https://www.thestreet.com/technology/nvidia-us-government-at-odds-over-china

Over the weekend, U.S. Commerce Secretary Gina Raimondo once again made it clear that there will be no loopholes for Nvidia to jump through.

“We cannot let China get these chips. Period,” Raimondo said at the Reagan National Defense Forum on Saturday, according to Bloomberg. “We’re going to deny them our most cutting-edge technology. … I know there are CEOs of chip companies in this audience who were a little cranky with me when I did that because you’re losing revenue,” she said. “Such is life. Protecting our national security matters more than short-term revenue.”

She reportedly even called out Nvidia, saying “If you redesign a chip around a particular cut line that enables them to do AI, I’m going to control it the very next day.”

As for “doing AI,” well, there are versions of generative AI software that run on everyday laptops and even high-end smartphones, all of which are built in China today.

Nvidia CEO Huang points out that whatever we don’t sell to China, China will figure out how to make for itself. At the high end, that’s going to be hard, but for the more middle-of-the-road chips we’re talking about here, it’s extremely likely in my view.

24 Likes

Smorg, does that reduce your confidence level in Nvidia?
Saul

7 Likes

Not yet - I bought more shares today.

But, I’ll be keeping tabs on Nvidia’s progress on this.

22 Likes

We seem to be pretty early in the adoption cycle still for AI. Most potential applications for the equivalent of “paving the cow paths” – doing what you did before, but now with AI – are still on the drawing boards. I believe we are some years away from the first applications which will be analogous to the “building the highways” phase (where you use the new technology to do things that were unimaginable previously).

What about competition? The hardware and software barriers are really high, measured in years (being generous to the competition).

What about China? Necessity is the mother of invention, but spycraft gets you there faster. It’ll take them many years to develop their own ability to match NVDA. In the meantime, given the stakes, count on them to harvest as many NVDA modules as they can get from the grey market. They’ll find a way to access bootleg copies of CUDA. Will that prevent them from developing their highest priority applications for AI? Probably not - but it sure will slow them down. And the “Spy vs Spy” fireworks around NVDA chips will get intense.

15 Likes

Despite Jensen Huang, CEO/co-founder of Nvidia, stating repeatedly over the last six months that the supply of H100 GPUs is on track to 3X “this year,” wait times for the H100 are still near 52 weeks.

I’d say Phase 1, the replatforming of global businesses, is rapidly developing, and Phase 2, the fleshing out of infrastructure, is imminent.

If Phase 3 is the implementation of AI, then reaping the benefits of massively improved productivity is on the way, eh?

Does it sound like I’m trying to time this? I’m not sure if investors have a choice. It’s coming😳.

Best,

Jason

16 Likes

The threat of in-house-designed ASICs from Nvidia’s customers is not a concern, at least not in the near term (and by near term, I mean years). For now, those customers hope to offload as much of their dependency on NVDA as they can, but the best of the best will come from NVDA, in the form of “as much as they can get,” for what I think will be a very long time.

China is a don’t-care for now, but it’s what has been keeping the price down. Congress’s strategy to keep China down will have a delaying effect, but China was always going to develop its own national semiconductor and AI technology and will spend whatever it takes to do that. In the meantime, Jensen can still sell every H100 he can get out of TSMC. Demand is more than sufficient.

TSMC supply constraints are the only lid I see on NVDA revenue. Not demand, just supply.

Which brings us to the server suppliers mentioned, such as HP and Dell. Why do they even exist today in this market space? AMZN, MSFT, and GOOG all develop their servers in-house. In fact, many former HP engineers in my neighborhood now work for MSFT (or AMD). For the AI market, I suspect they will struggle to add value. Jensen is selling his chips in higher-level forms such as the server walls etc., bundled with the interconnect and software to make it all work. Dell and HP and these companies are simply resellers in this model going forward, it seems to me. They can be the middleman with the help line for enterprise customers, but they long ago stopped developing technology. So I look for NVDA to sell chips to a small number of large AI players and to sell system-level solutions to the rest. There are just not enough chips to go around, and there never will be until we reach the point where more performance per watt isn’t critical for AI applications.

It will be interesting to see going forward how much revenue per GPU is generated. If it’s in the form of server walls, you can expect the number to grow for some time. And I wouldn’t expect AMD or Intel to get large slices of this market in the near future, as the technology is not just a chip; it’s so much more (i.e., software).

12 Likes

MFChips,

I think the dynamics you describe will force companies to bleed every last bit of performance not just from GPUs but from the AI system as a whole.

GPUs are just one component of the AI system, and any system of interconnected components has a lot of potential performance bottlenecks.

So people will be looking to identify and resolve all bottlenecks in the AI system. For instance: storage. I have a position in $PSTG; part of my thesis is that $PSTG is the best solution for eliminating performance bottlenecks related to storage used in AI tasks.

8 Likes

MF Chips, when you said, “look for NVDA to sell their chips to a small number of large AI players and to sell system level solutions to the rest. There is just not enough chips to go around and there never will be until we reach the point that more performance per watt isn’t critical for AI applications.”

I’m thinking you’re referring to…
The HGX platform, also known as Nvidia’s AI supercomputer, is built for developing, training, and running inference on generative AI models. It combines four or eight AI GPUs (such as A100s or H100s) using Nvidia’s networking solutions (InfiniBand) and NVLink technology, and it also includes the NVIDIA AI Enterprise software platform.
Tell me if I’m wrong.

As intjudo said, “GPUs are just one component of the AI system, and any system of interconnected components has a lot of potential performance bottlenecks.” What I added above illustrates the point I think you’re making as well: the HGX platform is Nvidia addressing all those bottlenecks before anyone else, and therefore completely dominating, per Nvidia’s slide deck at the last ER, this trillion-dollar annual space.
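
To make the bottleneck point concrete, here’s a minimal CUDA sketch of my own (assuming a machine with at least two visible GPUs; not anything from Nvidia’s deck): with an NVLink-class interconnect, one GPU can copy to a peer GPU’s memory directly instead of staging everything through host RAM, which is exactly the kind of bottleneck HGX is built to remove.

```cpp
// p2p_sketch.cu -- hypothetical illustration of peer-to-peer GPU memory access.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    int n_gpus = 0;
    cudaGetDeviceCount(&n_gpus);
    if (n_gpus < 2) { printf("Need at least 2 GPUs for this demo.\n"); return 0; }

    // Can GPU 0 address GPU 1's memory directly (NVLink or PCIe P2P)?
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, 0, 1);
    printf("GPU0 -> GPU1 peer access: %s\n", can_access ? "yes" : "no");

    const size_t bytes = size_t(1) << 26;  // 64 MiB test buffer
    float *buf0 = nullptr, *buf1 = nullptr;
    cudaSetDevice(0);
    cudaMalloc(&buf0, bytes);
    if (can_access) cudaDeviceEnablePeerAccess(1, 0);  // let GPU 0 touch GPU 1's memory
    cudaSetDevice(1);
    cudaMalloc(&buf1, bytes);

    // Device-to-device copy; on an HGX board this rides NVLink rather than
    // bouncing through CPU memory, which is the bottleneck being avoided.
    cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
    cudaDeviceSynchronize();

    cudaFree(buf1);
    cudaSetDevice(0);
    cudaFree(buf0);
    return 0;
}
```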

At COMPUTEX 2023, Jensen Huang announced figures on the recent exponential growth of CUDA: it now has over four million developers, more than 3,000 applications, and 40 million downloads historically, 25 million of those in the previous year alone. Furthermore, 15,000 startups have been built on Nvidia’s platform, and 40,000 large enterprises worldwide are utilizing accelerated computing.

How much pull does Nvidia have, to get Oracle, GCP, and Azure all to partner with DGX Cloud?

Done and done.

We’ll supply them; what does that look like?
Here’s a data point…can’t remember where I got this one, sorry.

ASML makes a lot of the equipment used to manufacture Nvidia GPUs.

ASML released third-quarter 2023 results on Oct. 18. Though the company reiterated its 2023 revenue-growth forecast of 30%, it expects 2024 revenue to remain flat. Another alarming reading from the report was the 71% year-over-year decline in net bookings to 2.6 billion euros ($2.76 billion). The company received $9.45 billion in net bookings in the same period last year.

Yes…

H100 chips are manufactured using ASML’s extreme ultraviolet lithography (EUV) process, which is deployed for making chips on 7nm, 5nm, and 3nm nodes.

ASML got bookings worth $530 million for these EUV machines last quarter. Each machine costs around $200 million, which means ASML received orders for only two or three of these machines last quarter (rough arithmetic below).
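
Checking that arithmetic (my own division; the original article rounds down):

$$
\frac{\$530\text{M in EUV bookings}}{\$200\text{M per machine}} \approx 2.65 \quad\Longrightarrow\quad \text{roughly two to three machines}
$$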

However, this is not a red flag for Nvidia. ASML shipped 31 EUV systems in 2020, followed by 42 in 2021 and 54 in 2022. This year, the company is expected to ship 60 EUV systems. Given that each EUV system reportedly takes between one and two years to deliver to customers, it won’t be surprising to see ASML ship a healthy number of EUV systems in 2024 as well.

ASML was sitting on an order backlog worth $37.1 billion at the end of the third quarter. Its guidance indicates that it could finish 2023 with revenue of $28.6 billion, of which it has already delivered $21.5 billion in the first nine months of the year. So ASML has a solid enough backlog for 2024.

I’ll stop here, but yes, I’m at ~14% and looking to add here.

Best,

Jason

16 Likes

Yes, I am referring to HGX and future variations, including “as a service”

4 Likes

Another “read”: Lisa Su, CEO of AMD, says they expect the market for AI GPUs to reach $400B by 2027. For context, NVDA sold $14B worth of AI GPUs in the quarter just ended.

AMD, of course, wants a piece of that opportunity, but even she doesn’t believe that NVDA will be displaced as the dominant player by then.

Long NVDA since Fall 2018
Long AMD since June 2023

18 Likes

Is anybody familiar with TPUs, or Tensor Processing Units? Google uses these instead of GPUs because they are optimized for AI workloads rather than graphics.

Google’s Gemini AI came out yesterday, and it performs marginally better than GPT-4. However, I’m guessing the Q* or GPT-5 work OpenAI has underway will outperform Gemini.

Here’s a pretty good video summary that has a section on TPUs and the Google Gemini product, https://youtu.be/KQBA62yZURc?si=Beay5otHrgki5z5F

4 Likes

There’s a lot more needed to compete with Nvidia than silicon, or even silicon plus CUDA. Nvidia provides a ton of expert engineering consulting services, and it has a gaggle of partners that provide related services (and, I think, some ancillary hardware products).

AMD will undoubtedly receive orders for their new chip. Most likely, there are applications where it will be good enough. But I seriously doubt they will put much of a dent in Nvidia’s order book.

9 Likes

I don’t think Lisa Su at AMD believes they will get a big chunk of those orders by 2027, but they’re definitely gunning for an increasing share, and I believe it will become meaningful for their business.

I never thought they’d succeed in catching and passing Intel, but they’ve done exceptionally well on that score despite people like me discounting them for many years. I don’t intend to make the same mistake twice when the opportunity for GPUs is even more explosive than the opportunity for microprocessor CPUs was.

I’m curious, @brittlerock – aside from quantum computing, is there a company that you think represents the strongest competitive threat to NVDA at this point?

7 Likes

No, I don’t think any other company comes close to Nvidia with respect to servicing the AI market, which is still “growing like crazy.” I don’t think AMD will take share from NVDA. They’ll probably get a smallish portion of the TAM, but it won’t come by taking customers away from NVDA.

Earlier in this thread, @intjudo said he/she was in PSTG, as he/she felt it will benefit from the AI boom. All that data has to rest somewhere. I think that’s a correct thesis. I owned and sold PSTG a few years ago. I didn’t have anything bad to say about PSTG when I sold it; I just felt there were other, better alternatives. I still think that’s true. For what it’s worth, I think NVDA will outperform PSTG.

NVDA is 12+% of my portfolio. If I had excess cash looking for a home, I’d put it in NVDA or IOT, or maybe even TSLA. I’m not as confident about TSLA; Musk drives me up a wall. The stock is volatile. The CEO is also volatile. At times he’s pedantic. I can’t help but think he’ll make Norway into an “example” of his anti-union stance, to the detriment of the company.

He fails to recognize cultural norms. He appears to think the Norwegians should agree with him. He pretends to be very open-minded and willing to embrace all freedoms, but in truth, he thinks everyone should agree with him. Just look at what he’s done with his $44B investment in Twitter, now X. Over half of his investment is gone because he thinks freedom of speech and unfettered speech are the same thing. But yeah, I’ve got a position in TSLA, in spite of my feelings about Musk. Sorry about the digression.

BTW, I got out of IONQ with a handsome profit. I might get back in - but this is a market timing strategy, inappropriate discussion material for this board.

16 Likes