Nvidia FY 2025 Q1 Results out

From a story perspective, CEO Jensen Huang has talked about a seismic shift in computing before, and repeats that in this ER call:

Longer term, we’re completely redesigning how computers work. And this is a platform shift. Of course, it’s been compared to other platform shifts in the past. But time will clearly tell that this is much, much more profound than previous platform shifts. And the reason for that is because the computer is no longer an instruction-driven only computer. It’s an intention-understanding computer. And it understands, of course, the way we interact with it, but it also understands our meaning, what we intend that we asked it to do and it has the ability to reason, inference iteratively to process a plan and come back with a solution. And so every aspect of the computer is changing in such a way that instead of retrieving prerecorded files, it is now generating contextually relevant intelligent answers . And so that’s going to change computing stacks all over the world. And you saw a build that, in fact, even the PC computing stack is going to get revolutionized. And this is just the beginning of all the things that – what people see today are the beginning of the things that we’re working in our labs and the things that we’re doing with all the startups and large companies and developers all over the world. It’s going to be quite extraordinary.

Layman’s summary:

It’s not trying to just detect the cat, which was plenty hard in itself, but it has to generate every pixel of a cat. And so the generation process is a fundamentally different processing architecture.

And then there's the Sovereign Nation TAM that Huang started talking about a quarter or two ago:

From nothing the previous year, we believe Sovereign AI revenue can approach the high single-digit billions this year. The importance of AI has caught the attention of every nation.

Also, there was some worry earlier about a pause in Hopper demand, in effect that Blackwell had Osborned it. Jensen said to that:

We see increasing demand of Hopper through this quarter. And we expect to be – we expect demand to outstrip supply for some time as we now transition to H200, as we transition to Blackwell. Everybody is anxious to get their infrastructure online. And the reason for that is because they’re saving money and making money, and they would like to do that as soon as possible.

And, when pressed on the potential Osborning, since the new Blackwell is so much faster:

If you’re 5% into the build-out versus if you’re 95% into the build out, you’re going to feel very differently. And because you’re only 5% into the build-out anyhow, you build as fast as you can.

So that’s the smart thing to do. They need to make money today. They want to save money today. And time is really, really valuable to them.

The reason for that is because the next company who reaches the next major plateau gets to announce a groundbreaking AI. And the second one after that gets to announce something that’s 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better?

And you gotta love how Huang ends the call:

Well, I can announce that after Blackwell, there’s another chip. And we are on a one-year rhythm.

61 Likes

This also sounded positive…
Stacy Rasgon

Hi, guys. Thanks for taking my questions. My first one, I wanted to drill a little bit into the Blackwell comment that it’s in full production now. What does that suggest with regard to shipments and delivery timing if that product is – doesn’t sound like it’s sampling anymore. What does that mean when that’s actually in customers’ hands if it’s in production now?

Jensen Huang

We will be shipping. Well, we’ve been in production for a little bit of time. But our production shipments will start in Q2 and ramp in Q3, and customers should have data centers stood up in Q4.

Stacy Rasgon

Got it. So this year, we will see Blackwell revenue, it sounds like?

Jensen Huang

We will see a lot of Blackwell revenue this year.

Overall, the triple-digit growth keeps coming, but what I found reassuring, going into lapping the year-ago quarters when Nvidia's revenues blew up, was the quarter-on-quarter growth rate of 18%: it indicates ongoing growth beyond the YoY triple-digit growth since the inflection, suggesting very high double-digit growth continuing.

There were some reassuring points for SMCI holders in the detail, although I'm watching in case Nvidia moves closer to another competitor or gets interested in serving clients directly.

Ant

42 Likes

The idea of Nvidia building racks and selling directly to the customer, rather than selling product to Supermicro and others, can't be a novel notion. For sure Jensen has considered it. And despite the fact that he and Charles have been close friends since childhood, I'm confident that Nvidia would do that if it made economic sense.

But building racks is a low-margin business, highly dependent on volume to produce outsized revenue and earnings. I'm sure that this would just be a dilution of resources for Nvidia. I've got positions in both companies. I don't worry about Nvidia competing with Supermicro. And so far, I don't worry too much about Dell, HPE, etc. But if Supermicro is going to suffer a loss of sales, it will be to one of their present-day competitors rather than to a new entrant, Nvidia in particular.

9 Likes

Does anyone know how many configurations SMCI supports for Nvidia-based systems?

This from Huang on the Nvidia call:

Blackwell will be available in over 100 OEM and ODM systems at launch, more than double the number of Hopper’s launch and representing every major computer maker in the world.

And then, I think this explains what Nvidia is doing today:

we’re delivering systems that perform at scale. And the reason why we know they perform at scale is because we built it all here. Now one of the things that we do that’s a bit of a miracle is that we build entire AI infrastructure here, but then we disaggregated and integrated into our customers’ data centers however they liked.

Which, to me, sounds like they design and build a complete system, and then make the separate components available (the “disaggregated” part) for other system builders to combine as their customers (the end users) want.

Which has also been key to Nvidia being able to remain on top of the AI world despite rather large changes in how AI software works. From CNNs to Transformers, from image recognition to image generation, AI software has evolved a lot, yet Nvidia remains the best hardware (and platform software) on which to run it. More from Huang:

The versatility of our platform and the fact that we design entire systems is the reason why over the course of the last 10 years or so, the number of start-ups that you guys have asked me about in these conference calls is fairly large. And every single one of them, because of the brittleness of their architecture, the moment generative AI came along or the moment the fusion models came along, the moment the next models are coming along now. And now all of a sudden, look at this, large language models with memory because the large language model needs to have memory so they can carry on a conversation with you, understand the context. All of a sudden, the versatility of the Grace memory became super important. And so each one of these advances in generative AI and the advancement of AI really begs for not having a widget that’s designed for one model. But to have something that is really good for this entire domain, properties of this entire domain, but obeys the first principles of software, that software is going to continue to evolve, that software is going to keep getting better and bigger. We believe in the scaling of these models. There’s a lot of reasons why we’re going to scale by easily a million times in the coming few years for good reasons, and we’re looking forward to it and we’re ready for it. And so the versatility of our platform is really quite key.

TL;DR:
Nvidia builds systems to validate that everything they make all works well together, but with hundreds of possible combinations needed to support all the different end-user configurations and use-cases, Nvidia isn’t going to sell all the possible systems that are needed.

19 Likes

People need to remember that Jensen was hyping SMCI and their relationship long before he started talking about Dell. SMCI is building more factories for the demand they are receiving.

Now, the firm is looking to expand its business to the more advanced NVIDIA Blackwell AI GPU architecture, as reports suggest that Supermicro Computer has received an enormous order for Blackwell-focused AI server racks. Taiwan Economic Daily reports that SMCI is all set to ship a 25% supply of NVIDIA’s GB200-based Blackwell AI servers, with total units reaching the 10,000 mark. This time, it looks like Team Green has entrusted SMCI with more responsibility, which, if things go well, will ultimately translate into gigantic economic performance.

Supermicro (SMCI) Receives Huge NVIDIA Blackwell AI Server Orders, Amounting To 25% of Total Supply.

Andy

35 Likes

To what extent does this potentially parallel the 286/386/486 cycle of the 1980s and 1990s (some of you are old enough to remember those days), where new computing power bred more powerful programs that then made the older chips obsolete? Is a planned-obsolescence cycle emerging, where MSFT/GOOGL/META need to revamp/rebuild their datacenters as AI models become more and more complex? Or is this primarily about achieving and maintaining scale?

In other words, will today's datacenters be obsolete in five years? Or will it simply be an expansion of datacenters to accommodate ever-increasing server demand?

Peter

15 Likes

The industry standard is to depreciate datacenter assets over 3 to 5 years. The recent trend had been to extend those schedules, with Amazon going from 3 to 4 years in 2020. A bitcoin miner in their recent 10-K extended their depreciation from 2 to 5 years. Whether Nvidia products will be so good that they force earlier depreciation and/or write-offs, as companies have to switch to Nvidia to keep up with their competitors, will be something to watch.
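A shorter useful life means a larger annual expense hitting the income statement. A minimal sketch of the straight-line math, using a hypothetical $10M server fleet (the dollar figure is illustrative, not from the post):

```python
# Hypothetical example: annual straight-line depreciation expense for a
# $10M server fleet under different useful-life assumptions (3-5 years).
cost = 10_000_000  # purchase cost in dollars

for years in (3, 4, 5):
    annual = cost / years
    print(f"{years}-year life: ${annual:,.0f} per year")
```

Stretching the same asset from a 3-year to a 5-year life cuts the annual expense by a third, which is why extending schedules flatters near-term earnings.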

26 Likes

Refresh cycles have been part of the IT environment since before cloud computing was a thing. The performance of new technology eventually makes older technology obsolete. I was in IT at Boeing for 30 years. My experience goes back to a time when mainline corporate computing was centralized, running on one (sometimes more than one) mainframe computer, usually IBM. The terminals were “dumb” display devices with no internal computing capability. Storage was provided by magnetic tape, which was read in a linear manner; random-access disc storage came later.

But that's a good example of newer technology making old technology obsolete. Disc drives were revolutionary. Random access of data totally changed the way software was written. In the 80s, mini-computers emerged. They did not displace mainframes, but they changed the office environment significantly. The PC emerged in the early 80s, but it didn't really begin to penetrate the corporate environment in a significant way until the end of the decade, when the “smart” terminal displaced the dumb terminal.

While preparing a presentation on semiconductor evolution, Gordon Moore observed that every 18-24 months a new CPU chip became available at the same price with more or less twice the capability of its predecessor, or the price of an existing chip would be reduced by 50%. This observation proved true, and the result was a shortened refresh cycle. Some corporations were willing to skip a generation, but unless severely cash-strapped, one generation was the most they would skip.
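As a rough illustration of why that observation drove refresh cycles, a doubling every 18-24 months compounds dramatically over a decade:

```python
# Capability multiple accumulated over a decade, for the two ends of the
# 18-24 month doubling period Moore observed.
months_in_decade = 120

for doubling_months in (18, 24):
    doublings = months_in_decade / doubling_months
    print(f"doubling every {doubling_months} months: "
          f"~{2 ** doublings:.0f}x in 10 years")
```

Even at the slow end (24 months), hardware a decade old is roughly 32x behind; skipping more than one generation quickly became untenable.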

So the answer to your question is most definitely “yes” though a five year cycle is probably excessive.

Jensen Huang (CEO of Nvidia) announced that Moore’s law is completely dead. He didn’t specify a time period for a new cycle, but he was clear that the price will rise. He also asserted that increased computing power and energy efficiency was a compound problem of hardware and software, “the full stack.”

15 Likes

I didn’t want to start yet another thread on Nvidia, so putting this here.

Here are the total revenue results for the last 5 quarters:
2025 Q1: 26.04B, Up 18% QoQ & Up 262% YoY
2024 Q4: 22.10B, Up 22% QoQ & Up 265% YoY
2024 Q3: 18.12B, Up 34% QoQ & Up 206% YoY
2024 Q2: 13.51B, Up 88% QoQ & Up 101% YoY (The “ChatGPT” moment!)
2024 Q1: 7.19B, Up 19% QoQ & Down 13% YoY

Guidance for next quarter is $28B +/-2%. If Nvidia does the high end, then the results would be:
2025 Q2: 28.56B, Up 9.7% QoQ & Up 111% YoY

To match the 18% QoQ gain of this last quarter, Nvidia would have to do $30.7B, which would be up 127% YoY. That seems a stretch, but not out of the realm of possibility if fab production increases and TSMC ramps Blackwell with few problems.
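The arithmetic behind those figures can be sketched in a few lines, using the revenue numbers ($B) quoted above:

```python
# Quarterly revenue in $B, FY2024 Q1 through FY2025 Q1
revenue = {
    "2024Q1": 7.19, "2024Q2": 13.51, "2024Q3": 18.12,
    "2024Q4": 22.10, "2025Q1": 26.04,
}

# Latest quarter's sequential and year-over-year growth
qoq = revenue["2025Q1"] / revenue["2024Q4"] - 1
yoy = revenue["2025Q1"] / revenue["2024Q1"] - 1
print(f"QoQ: {qoq:.0%}, YoY: {yoy:.0%}")          # QoQ: 18%, YoY: 262%

# Revenue needed next quarter to repeat an 18% QoQ gain, and its YoY rate
target = revenue["2025Q1"] * 1.18
print(f"target: ${target:.1f}B, YoY: {target / revenue['2024Q2'] - 1:.0%}")
```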

But it seems fair to say that the days of Nvidia up more than 200% YoY are gone. If Nvidia is up only 8% QoQ for the remainder of the year, is its 39x forward P/E reasonable or too high?

Blackwell is more expensive than the current Hopper chips, but actually offers a better price/performance ratio. As a 10%-per-quarter grower, what's the right multiple for Nvidia? 15%?

When does the law of large numbers, which Nvidia has so far amazingly defied, start to affect this company? A 15% QoQ rate is 75% YoY, which is a lot for a $2.5T company.
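The QoQ-to-YoY conversion above is just compounding over four quarters; a quick sanity check:

```python
# A steady quarter-on-quarter growth rate compounds over four quarters
# into the equivalent year-over-year rate.
def yoy_from_qoq(qoq: float) -> float:
    return (1 + qoq) ** 4 - 1

print(f"15% QoQ -> {yoy_from_qoq(0.15):.0%} YoY")  # ~75% YoY
print(f" 8% QoQ -> {yoy_from_qoq(0.08):.0%} YoY")  # ~36% YoY
```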

21 Likes

Trying to understand why you would use the revenue growth rate and compare it to P/E, when the bottom number there is earnings.

The last four quarters of revenue growth vs EPS growth:

Revenue 101% → 206% → 265% → 262%
EPS 854% → 1,274% → 761% → 629%

Nvidia has become dramatically more profitable in the last year, and EPS has grown way faster than revenue.


Even Sam Altman, CEO of OpenAI, has said it's too hard to predict how AI will look even one year from now, and in the past year many things about AI have surprised him.

What Jensen has said, though, is that he sees a cycle of at least 4-5 years before Hopper chips begin to get replaced. It's crazy to think that Hopper is already obsolete technology because Blackwell is 30x faster.

However, customers want to buy Hopper if they can, even knowing it's far behind Blackwell, because they don't want to get left behind by competitors who are already training their models. One of OpenAI's current advantages is that they spent a tremendous amount of compute to train their models before anybody else was doing it on this scale. Their models are still maybe six months to one year ahead of other AI models.

Now everybody has to play catchup, including Google, Meta, Amazon, and Tesla. That's in addition to “sovereigns,” which are incredibly far behind now. Countries like Saudi Arabia, South Korea, and Japan now need to start spending massively on AI too. Jensen keeps talking about AI being a global phenomenon; however, the last figure I saw was that 95% of “the AI market” is produced by the USA, and Nvidia is the absolute dominant leader here.

30 Likes

I think that at present, Nvidia holds the patents on picks and shovels in a gold rush. Someone will come up with other designs that are performance and price-competitive, but when? The boom is right now.

8 Likes

Yeah, good point. My excuse is that I got rushed away from the computer to eat dinner before it got cold.

Great point. What metrics do you think are proper for evaluating NVDA from here?

The other thing he said was that Blackwell isn't Osborning Hopper: the ROI for companies building out their AI infrastructure is high, so they want to start earning on it right away rather than wait 6-9 months. And companies want to be first with their AI software/product breakthroughs, since being first gets all the attention. He also said that the AI build-out is only about 5% done, so there's still 95% more to come, meaning customers don't want to wait.

He also said that Blackwell is “backwards compatible” with Hopper, so I expect that the systems builders will have an easy time migrating to it from their existing Hopper-based systems.

22 Likes

Ignore the clickbait title, this video actually has a lot of good information on Nvidia’s products, how they’re configured, what’s new, and why they’re so great. Also summarizes financial results very well.

26 Likes