NVDA 4th Quarter Earnings

Quick Take: a really impressive quarter from Nvidia. They beat revenue estimates by 10.5%, beat the EBIT guide by 26.1%, and guided next quarter's revenue 9.1% above estimates.

  • Record quarterly revenue of $22.1 billion, up 22% from Q3, up 265% from year ago
  • Record quarterly Data Center revenue of $18.4 billion, up 27% from Q3, up 409% from year ago
  • Record full-year revenue of $60.9 billion, up 126%
    - Operating income up 563% YoY
    - Net income up 491% YoY
    - Gross margins up to 76.7%

Incredible operating leverage - Revenue was up 265% YoY but operating expenses were only up 25% YoY

“Accelerated computing and generative AI have hit the tipping point. Demand is surging worldwide across companies, industries and nations,” said Jensen Huang, founder and CEO of NVIDIA.

“Our Data Center platform is powered by increasingly diverse drivers — demand for data processing, training and inference from large cloud-service providers and GPU-specialized ones, as well as from enterprise software and consumer internet companies. Vertical industries — led by auto, financial services and healthcare — are now at a multibillion-dollar level.

“NVIDIA RTX, introduced less than six years ago, is now a massive PC platform for generative AI, enjoyed by 100 million gamers and creators. The year ahead will bring major new product cycles with exceptional innovations to help propel our industry forward. Come join us at next month’s GTC, where we and our rich ecosystem will reveal the exciting future ahead,” he said.

NVIDIA’s outlook for the first quarter of fiscal 2025 is as follows:

  • Revenue is expected to be $24.0 billion, plus or minus 2%.
  • GAAP and non-GAAP gross margins are expected to be 76.3% and 77.0%, respectively, plus or minus 50 basis points.
  • GAAP and non-GAAP operating expenses are expected to be approximately $3.5 billion and $2.5 billion, respectively

Full Press Release:

NVIDIA Corporation - NVIDIA Announces Financial Results for Fourth Quarter and Fiscal 2024

–Long NVDA 11.9% and SMCI 4.9%, both are up ~10-11% AH


Adding some takeaways from the call:

  • GAAP EPS $4.93, up 765% yoy, 33% qoq
  • Adj EPS $5.16, up 486% yoy, 28% qoq
  • Guide is for 24B revenue next quarter
  • The world has reached a tipping point of new computing
  • Data center infrastructure is rapidly transitioning from general purpose to accelerated computing
  • Companies may accelerate every workload possible to drive future improvement in performance, TCO and energy efficiency
  • Compute revenue grew more than 5x and networking revenue tripled from last year
  • We are delighted that supply of Hopper architecture products is improving
  • Approximately 40% of data center revenue was for AI inference
  • Large cloud providers represented more than half of our data center revenue
  • Companies from search to e-commerce, social media, news and video services are using AI for deep learning-based recommendation systems
  • Consumer Internet companies are investing in generative AI to support content creators, advertisers and customers through automation tools for content and ad creation
  • Enterprise software companies are applying generative AI to help customers realize productivity gains
  • ServiceNow’s generative AI products in their latest quarter drove their largest ever net new annual contract value contribution of any new product family release
  • Foundation of large language models is thriving
  • Significant adoption in verticals such as automotive, health care, and financial services
  • In health care, digital biology and generative AI are helping to reinvent drug discovery, surgery, medical imaging, and wearable devices
  • American Express improved fraud detection accuracy by 6% using NVIDIA AI
  • Growth was strong across all regions except for China, where our data center revenue declined significantly following the US government export control regulations imposed in October
  • China represented a mid single digit percentage of data center revenue in Q4, expect it to stay in a similar range for Q1
  • Sovereign AI has become an additional demand driver
  • Our Quantum InfiniBand solutions grew more than 5x yoy
  • Leading OEMs, including Dell, HPE, Lenovo and Super Micro are partnering with global sales channels to expand AI solutions worldwide
  • Gaming revenue was 2.87B, flat sequentially and up 56% yoy (Note they expected gaming revenue to be down sequentially)
  • Enterprises are refreshing their workstations to support generative AI-related workloads such as data preparation, LLM fine-tuning and retrieval augmented generation
  • Gross margins benefitted from favorable component costs
  • Accelerated computing can dramatically improve your energy efficiency, can improve data processing by 20:1
  • “The amount of inference that we do is just off the charts now”
  • Almost every single time you interact with ChatGPT you know that we’re inferencing
  • Seeing sovereign AI infra being built in Japan, Canada, France, and so many other regions
  • AI generation factories are going to be in every industry, every company, every region
  • GPUs are in every single step of a recommender system now
  • Supply is improving, supply chain is just doing an incredible job for us
  • NVIDIA Hopper GPU has 35,000 parts, weighs 70 pounds
  • Doing incredibly well with Spectrum-X, brand new product into the world of Ethernet
  • Reset product offering for China and are now sampling to customers there; doing their best to compete under the new restrictions
  • Every data center will be accelerated so the world can keep up with the computing demand and increase throughput while managing cost and energy
  • NVIDIA AI supercomputers are essentially AI generation factories of this industrial revolution
  • Every company and every industry is fundamentally built on their proprietary business intelligence, and in the future, their proprietary generative AI
  • Generative AI has kicked off a whole new investment cycle to build the next $1 trillion of infrastructure of AI generation factories

This is really an incredible story, backed up by huge numbers. These results also absorb a massive slowdown in China due to the new regulations, with NVIDIA back to the drawing board creating samples for customers there.


My opinion: The reason NVDA does so well for AI applications is that the architecture of an NVDA GPU solution matches the architecture of an AI computation problem. A GPU is like a superfast calculator set up to do billions of the same kind of calculation (matrix calculations). The CPU that controls the AI solution breaks up a problem, then hands out calculation assignments to the GPU co-processors, gathers the results and does it again. The GPU co-processors can do millions of repetitive calculations simultaneously while the CPU orchestrates them. A general CPU architecture does not do this as well as the CPU with GPU co-processing units.
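The CPU-orchestration/GPU-worker split described above can be sketched in a few lines. This is purely illustrative NumPy, not NVIDIA code: the same matrix multiply done one scalar operation at a time ("CPU style") versus handed off as one batch of identical operations (on real hardware, a GPU library such as cuBLAS plays the role of the vectorized call).

```python
import numpy as np

# Illustrative sketch only: the same workload expressed two ways.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 64))
B = rng.standard_normal((64, 64))

# "CPU style": an explicit triple loop, one scalar multiply-add at a time.
C_loop = np.zeros((64, 64))
for i in range(64):
    for j in range(64):
        for k in range(64):
            C_loop[i, j] += A[i, k] * B[k, j]

# "Accelerator style": hand the entire batch of identical multiply-adds
# to a single vectorized kernel call.
C_vec = A @ B

# Both produce the same answer; the difference is how the work is dispatched.
assert np.allclose(C_loop, C_vec)
```

The point of the sketch is the dispatch model, not the speed of NumPy itself: AI workloads are dominated by exactly this kind of uniform, repetitive arithmetic, which is why hardware built to batch it wins.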

Think of the movies you have seen on the Manhattan Project. A few theoretical physicists doing calculus and differential equations to form a problem, then handing the numbers and equations off to a room full of people who did the thousands of numerical calculations simultaneously by hand.


Thanks, @FoolishJeff @wpr101 @ibuildthings, a couple points that stood out to me.

First, on @FoolishJeff’s point on the operating leverage:

It was even better than that. OpEx for 2024 was up just $197m from 2023, an increase of only 1.8%. Essentially OpEx was flat while revenue skyrocketed. Hard to get more leverage than that!
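A quick back-of-envelope using only the figures in the post shows just how extreme that leverage is. Note the FY2023 values below are implied from the stated growth rates, not reported here:

```python
# Figures from the post: FY2024 revenue of $60.9B (up 126%); OpEx up $197M (1.8%).
rev_fy24 = 60.9e9
rev_fy23 = rev_fy24 / 2.26           # implied by "up 126%", ~$26.9B
opex_increase = 197e6                # from the post
opex_fy23 = opex_increase / 0.018    # implied by "up 1.8%", ~$10.9B

incremental_revenue = rev_fy24 - rev_fy23   # ~$34B of new revenue...
incremental_opex = opex_increase            # ...against ~$0.2B of new OpEx

# Well over $150 of new revenue per incremental dollar of OpEx.
assert incremental_revenue / incremental_opex > 150
```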

Note that the forecast for 2025 calls for OpEx to increase in the mid-30% range to support increased investment in future opportunities.

Also interesting: At least some of the NVDA software is now licensed on an annual basis under the name “NVIDIA AI Enterprise”. The license cost is $4500/yr/GPU. Presumably that’s list price, and there are discounts off of that, but that’s quite a shift in business model for the industry.

Instead of purely relying on up-front purchases, NVIDIA is now charging an ongoing toll as well to drive on its AI roads. As the value of that software increases and the number of GPUs paying that toll increases each year, we can foresee this becoming a larger part of the NVIDIA revenue stream. It’s currently at a $1b run rate.
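For a rough sense of scale: if, purely as an assumption, all of that run rate were billed at the $4,500 list price (it isn't, since discounts exist, so this understates the real GPU count), the implied number of licensed GPUs would be:

```python
# Both inputs are from the post; the "every GPU pays list price" part is an assumption.
list_price_per_gpu = 4_500    # $ per GPU per year
annual_run_rate = 1e9         # $1B run rate

implied_licensed_gpus = annual_run_rate / list_price_per_gpu
assert 200_000 < implied_licensed_gpus < 250_000   # roughly 222,000 GPUs
```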



Thanks for pointing that out.

It was a really remarkable report, and I’m still trying to wrap my head around it, not to mention the implications for many companies in related industries (pstg, smci). And even those in seemingly distant industries (iot…).


Best pod I’ve seen on the whys and wherefores of AI.

At about the 1-hour mark of this podcast (https://x.com/jaminball/status/1760722388654936083?s=61), Brad presented the rationale for ‘the build out’: why it’s so hard to design chips and then build fabs. The first slide breaks down what Jensen Huang, CEO of NVIDIA, meant when he said there is about $1 trillion of data centers currently, and that in the next 4-5 years there will be $2 trillion in total data center asset cost, all of it accelerated. The green line is consensus estimates for Nvidia’s future market share.

They went on to say why they think Nvidia’s market share of accelerated computing will remain well above 85% for at least those 5 years.


This is just a nice chart of the companies discussed.





The podcast was very interesting; thanks for that. With regard to other companies coming out with competitor chips, they describe very succinctly just how unlikely something like that is. For those just intent on designing, such as Amazon or Google, they will be relying on the same fabs as Nvidia. So they are years away from rolling out an unproven chip through a congested fab pipeline.

For those planning to ‘design a chip and build a fab’ (Sam Altman, and maybe others), they were very polite, but it was clear at least to me that Altman’s pronouncements in this area should be taken as seriously as a 6-year-old’s claim that he’s going to be an astronaut.

Related to Altman’s plans, they spent some time talking about the much reported claims that sovereign states were serious about having their own datacenters (in other words not being reliant on the cloud providers). They didn’t have strong opinions one way or another as to whether that in the end would likely happen, but they did say that there is only one reason why a country would want that kind of capability (national security purposes, obviously, and not for the local automaker…).

Altman and others have mentioned very very large sums, on the order of several trillions, and they noted that there are only a very small number of economies that can manage that. Further, the added energy that all these new datacenters will consume will very significantly affect the established grids. They pointed out that the middle east oil producing countries are perhaps the only countries with both an abundance of funds and energy, and also not coincidentally they appear very serious about building out this infrastructure.

Finally, and this was new to me, they pointed out that datacenters are rapidly depreciating assets, and require constant renewal. So, this build out that is happening will be repeatedly renewed going forward. Nvidia’s market is not the single sale of gpus for a datacenter – they have a return customer, for as long as a datacenter is a ‘thing’, and as long as their chips are the standard, which by all accounts is for the next 5+ years, if not 10+.

I trimmed at about $700, but I am now thinking that was unwise.


That was a good podcast, thanks. A pretty rare thing, IMO.

At the very end, Brad talks about an inevitable “zone of disillusionment where you have a mismatch between supply and demand” that has happened with internet, mobile, etc… He’s going to use that as a buying opportunity for AI.

Reminds me of what’s happening right now with EVs. I do think it’s too early to buy (back) into TSLA (or even AEHR), but at some point the world (including Jonas) will realize that hybrids really aren’t the solution and EVs will rise again.

The question now for us NVDA and SMCI holders/wannabe owners is: do we wait for that trough of disillusionment? They pointed out in the podcast that the B100 (as a successor to Nvidia’s H100) will be even more expensive than the H100, but since it’s so much faster, the cost per compute minute will actually be cheaper.
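That pricing logic is worth making concrete. The numbers below are entirely made up for illustration, not actual NVIDIA prices or benchmarks: a chip can cost more yet be cheaper per unit of compute whenever its throughput grows faster than its price.

```python
# Hypothetical figures only: assume the newer chip costs 1.5x as much
# but delivers 2.5x the throughput of the older one.
old_price, old_throughput = 30_000, 1.0   # throughput normalized to 1.0
new_price, new_throughput = 45_000, 2.5

cost_per_compute_old = old_price / old_throughput   # 30,000 per compute unit
cost_per_compute_new = new_price / new_throughput   # 18,000 per compute unit

# The pricier chip is cheaper per unit of work delivered.
assert cost_per_compute_new < cost_per_compute_old
```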

What they didn’t talk about was that before the B100 comes out, the H200 will be coming out. That’s the true successor to the H100.

At any rate, if you’re interested in Nvidia, this is a good short primer:

The AI Chip Behind Nvidia’s Supersonic Stock Rally: QuickTake

When we first started discussing NVDA (back in the $400s, just a few months ago), the knock against it was the “law of large numbers,” which is a real thing. The stock has practically doubled since then. My estimation is that it’s got another double in it, but beyond that I have to agree it seems hard to imagine Nvidia’s market cap growing much further, despite the fact that it owns 80% or more of the AI chip market. And, as pointed out in the podcast, the odds of newcomers unseating Nvidia are really bad. For Sam Altman’s $7T idea, they threw out a 20% chance of designing a better chip than Nvidia as Nvidia keeps designing better chips themselves, and another 20% on building a Fab (manufacturing plant) that can produce viable chips at scale with a low enough defect rate to be profitable. So, that’s 20% of 20%, which is 4% overall.
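The 4% figure above is just the product of the two hurdles the podcast estimated, under the implicit assumption that the two successes are independent:

```python
# Rough estimates quoted from the podcast, not measured probabilities.
p_design = 0.20   # chance of out-designing Nvidia while Nvidia keeps improving
p_fab = 0.20      # chance of building a fab with viable yield at scale

p_both = p_design * p_fab   # assumes the two events are independent
assert abs(p_both - 0.04) < 1e-12   # 4% overall, as in the post
```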

And, again echoed in the podcast, outside of AMD and Intel, the companies working on AI chips are doing so aimed at their specific use cases: Tesla, Amazon, Meta, Microsoft. They may be able to design a chip that’s better for them than Nvidia’s more general-purpose chips, but they still have to get it manufactured (who is TSMC going to service first, eh?), and even then, it’s limited to their specific use cases. I look at Tesla, which made a big announcement (including hiring Jim Keller) several years ago, and Tesla is still buying (according to Elon) “all the H100s we can get.”

I think the good side of AMD and Intel is that at least it stops any talk of monopoly action against Nvidia.


Hi Smorg,

Thanks for your take on the AI build out. Since not that many are posting on this thread, I’d like to ask about your comment…

but at some point the world (including Jonas) will realize that hybrids really aren’t the solution and EVs will rise again

What EV slowdown?

Dan Ives,

“Our new EV survey says 50% of individuals are considering an EV vehicle when looking at a next vehicle purchase with cost/range key. The top takeaway from survey is 50% of new buyers are considering EVs up from 14% that own an EV today.”

China, slowdown maybe, but EV growth there will continue to rule the day, despite their economic woes.

In the US, there are days-of-supply issues with non-Tesla EVs, but not a high gross number of units, and they’re at higher price points. They’ll never put the capacity in, as the more units they produce the more money they lose, and none of the US OEMs are adding EV capacity.

In the US, if we see 13% adoption this year on a SAAR of 15.7M, that’s 2M EVs someone is going to have to produce… <100K in inventory today? Even 200K… I wouldn’t call that a glut. 100K of EV inventory is 15 days…

Gary Black,
Global EV demand grew by 30%+ in 2023 and is likely to grow by 30%+ in 2024. Surging EV adoption is the basis for our $TSLA investment thesis: Global EV adoption going from 12% in 2023 to 60% by 2030 translates to a +26% CAGR even with no change in SAAR.

If TSLA just holds its EV share (currently ~18%), we estimate TSLA EPS can grow by +35% CAGR between 2023-2030, justifying its current 58x 2024 P/E (TSLA 2024 PEG of 1.7x equal to Mag 7 avg 2024 PEG of 1.7x).
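For anyone checking the quoted math, the +26% figure falls out of a standard CAGR calculation over the 7 years from 2023 to 2030:

```python
# CAGR = (end / start)^(1/years) - 1, using the adoption shares from the quote.
share_2023, share_2030, years = 0.12, 0.60, 7

cagr = (share_2030 / share_2023) ** (1 / years) - 1
assert abs(cagr - 0.26) < 0.005   # ~25.9%, matching the "+26% CAGR" claim
```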

I just realized that my chart included hybrids, I now see your point.



So… back to Smorgasboard’s post on the above-mentioned podcast…

  • The question now for us NVDA and SMCI holders/wannabe owners is: do we wait for that trough of disillusionment? They pointed out in the podcast that the B100 (as a successor to Nvidia’s H100) will be even more expensive than the H100, but since it’s so much faster, the cost per compute minute will actually be cheaper.

I believe this is worthy of revisiting, thanks Smorg. I believe the distinction between hype cycle and adoption, as written out in this picture is really important.

S-curve of technological adoption is more important to me when making investments.



As the Captain says,
Trading is not investing but one cannot invest without trading. For a long time I tried to understand what “long term buy and hold” really means. It’s a strategy that aims to find good companies that one can hold for a long time but the execution of that strategy is not just buying and forgetting. It’s John Boyd’s OODA Loop, it’s what Saul continuously does with his portfolio

Observe–Orient–Decide–Act (rinse repeat)





The All-In Podcast covered Nvidia pretty well, hitting many of the points we’ve been discussing, including comparison to Cisco, how the software (application) is the next wave, that Nvidia has a share buy-back program in place - last quarter spending $2.7B of the program’s $25B size, and that we’re at the beginning of a decade long wave.


Jason, I’m not sure the TALC applies here in quite the way I think you are suggesting. In other cases where I have seen it applied the peak of the hype is lots of excitement, but limited real functionality, but here the functionality is already there and leaping forward. I think the applicable curve here is the S curve of adoption and we are already well into the steeply rising portion, any trough well behind us.


Just so happens that I’m in China at present. Based on my unscientific casual observation, I would guesstimate that EVs comprise 50% or more of the cars on the road (and there are lots of cars on the road). I’ve read about the crashing Chinese economy in several different publications, but honestly, if I hadn’t read about it I wouldn’t know it was going on.

Anyway, I recently came across this interesting stat regarding EVs in China, there are (or were) something like 300 different manufacturers of EVs here. Obviously, most of them will not survive, and a number of them have already folded.

Be that as it may, I don’t believe that there’s a slowdown in EV adoption here. 2023 EV production in China is estimated at 8 million vehicles.


Thanks @brittlerock! Do you have a guesstimate of the share of Tesla cars you saw?


Surprisingly few. I’ve not been in China for about 5 years due to covid. When I was here in 2019, I was surprised at how many Teslas I saw. It also seemed that Tesla was really the only EV to be seen (but I freely admit, I may have seen more EVs than I was aware of). Nevertheless, I had expected a repeat of that this year, only to actually observe relatively few Teslas.

Don’t run with this as being representative of China. First, I’m absolutely no car expert and with the proliferation of brands it’s damn near impossible to really know which ones are electric v. hybrid v. gas.

In addition, I spend the majority of my time in Guilin, my wife’s home town. It’s a medium size city in southeast China. I have no idea how representative it may be of greater China, but for sure it is not a trend setting city. My best guess is Shanghai would be a trend setter.