Hardware benefiting more than software from AI

I am trying to understand why hardware companies like Nvidia and Super Micro Computer are benefiting dramatically more from AI than software companies like Cloudflare, Snowflake, and Datadog.

NVDA and SMCI are massively raising guidance, citing visibility multiple quarters out, and producing huge increases in growth. SNOW, NET, and DDOG, meanwhile, seem effectively blindsided by the demand environment and are putting up guidance that is lower than the market expected.

First, we have companies like NET and SNOW saying things that would make an investor think they are huge beneficiaries of AI. Here's what NET said two quarters ago:

Our largest R2 customer is an AI company, and they're growing at just extraordinary rates as they put more data into their models

And here is what SNOW said this past quarter:

And with AI, I mean it’s going to drive a whole other vector in terms of workload developments

Now compare that to the incredible report NVDA had. Here are some of the more interesting bits about the massive demand for AI:

Generative AI is driving exponential growth in compute requirements

CSPs around the world are racing to deploy our flagship Hopper and Ampere architecture GPUs to meet the surge in interest from both enterprise and consumer AI applications for training and inference.

We expect this sequential growth to be largely driven by Data Center, reflecting a steep increase in demand related to generative AI and large language models.

Our generative AI large language models are driving this surge in demand, and it’s broad based across both our consumer Internet companies, our CSPs, our enterprises, and our AI startups.

And what happened is when generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time. And so now we see ourselves in 2 simultaneous transitions.

With generative AI becoming the primary workload of most of the world’s data centers generating information, it is very clear now that – and the fact that accelerated computing is so energy efficient, that the budget of a data center will shift very dramatically towards accelerated computing, and you are seeing that now.

We’ve seen it in a lot of places now that you can’t reasonably scale out data centers with general purpose computing and that accelerated computing is the path forward. And now it’s got a killer app. It’s called generative AI.

So here you've got the CEO of Nvidia saying this is an iPhone moment, a killer app, driving incredible demand, and giving them visibility into the future.

Now here is an interesting paradox: generative AI models are a software invention, but they do not benefit software companies as much as hardware ones.

Here's the crux of the issue: how much OpenAI spends per day to run ChatGPT. According to one analyst's estimate, running ChatGPT costs approximately $700,000 a day, which breaks down to about 36 cents for each question. OpenAI currently uses Nvidia GPUs for the computing power behind ChatGPT and its other products.

Each prompt or query to ChatGPT and similar products uses an absolutely MASSIVE amount of compute, roughly 36 cents per query by that estimate. They charge $20 a month for unlimited queries on ChatGPT Plus, so a subscriber who asks more than about 55 questions a month already costs more than they pay, and they lose a lot of money on customers who query often.

Also consider Bard from Google: it is completely free and already rivaling ChatGPT in terms of results.

Let's think about what happens on a typical query for a company that is using an LLM like ChatGPT and is set up with R2 from Cloudflare, Datadog for logging, and Snowflake for data warehousing. Let's also assume this same company runs on servers built with chips from Nvidia and Super Micro Computer.

Datadog may log the query/prompt the user inputs, and possibly the response that GPT returns (or not). This is simply a linear increase, one or two log lines per query, which doesn't add much to growth.
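
To make that concrete, here is a rough sketch of what that per-query logging could look like, assuming Datadog's HTTP log intake API and a hypothetical DD_API_KEY environment variable; the field names are just illustrative.

```typescript
// Minimal sketch: one structured log entry per LLM query, sent to Datadog's
// v2 HTTP log intake. Payload fields are illustrative, not a required schema.
async function logQuery(prompt: string, answer: string): Promise<void> {
  await fetch("https://http-intake.logs.datadoghq.com/api/v2/logs", {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      "DD-API-KEY": process.env.DD_API_KEY ?? "", // hypothetical env var
    },
    body: JSON.stringify([
      {
        ddsource: "llm-app",
        service: "chat-frontend",
        message: "llm_query",
        prompt,
        answer_length: answer.length,
      },
    ]),
  });
}
```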

Let's look at R2/Cloudflare Workers. It's relatively cheap: the client company is charged per call to the Worker and per storage operation. While it's consumption based, it's still relatively linear in terms of growth.
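
As a rough illustration (not NET's actual product code, and not any particular customer's), a Worker sitting in front of an LLM backend might look like the sketch below, assuming an R2 bucket binding named PROMPTS and a hypothetical internal LLM endpoint. Each user query is one Worker invocation plus one R2 write.

```typescript
// Sketch of a Cloudflare Worker that forwards a prompt to an LLM backend
// and archives the prompt/response pair in R2: one query in, one Worker
// invocation and one R2 write out, i.e. linear, metered growth.
export interface Env {
  PROMPTS: R2Bucket; // R2 binding (name assumed, configured in wrangler.toml)
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    const { prompt } = (await request.json()) as { prompt: string };

    // Hypothetical internal gateway standing in for the GPU-backed model.
    const llmResponse = await fetch("https://llm.internal.example/v1/complete", {
      method: "POST",
      body: JSON.stringify({ prompt }),
    });
    const answer = await llmResponse.text();

    // One object written per query.
    await env.PROMPTS.put(
      `${Date.now()}-${crypto.randomUUID()}.json`,
      JSON.stringify({ prompt, answer })
    );

    return new Response(answer);
  },
};
```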

On the Snowflake side, the GPT query might be sent over and stored for analytics, where a business analyst uses the product to create informational reports. Again, linear-style growth: one row per query.
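
And a sketch of that Snowflake hand-off, assuming the snowflake-sdk Node package and a hypothetical LLM_QUERIES table; one query adds one row.

```typescript
// Sketch: record one analytics row per LLM query in Snowflake.
// Account, warehouse, and table names are placeholders.
import snowflake from "snowflake-sdk";

const connection = snowflake.createConnection({
  account: "my_account",
  username: process.env.SNOWFLAKE_USER ?? "",
  password: process.env.SNOWFLAKE_PASSWORD ?? "",
  warehouse: "ANALYTICS_WH",
  database: "PRODUCT",
  schema: "EVENTS",
});

connection.connect((err) => {
  if (err) console.error("connection failed", err);
});

function recordQuery(prompt: string, latencyMs: number): void {
  connection.execute({
    sqlText:
      "INSERT INTO LLM_QUERIES (ts, prompt, latency_ms) VALUES (CURRENT_TIMESTAMP, ?, ?)",
    binds: [prompt, latencyMs],
    complete: (err) => {
      if (err) console.error("insert failed", err);
    },
  });
}
```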

Now turn to what happens on the hardware. The GPT query sets off an EXPONENTIALLY larger amount of compute than any of those software touchpoints; it literally does billions of calculations on each query. This is far more demand on the GPUs and compute than anything like gaming or even cryptocurrency sends to the data center.
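
To put a rough number on "billions of calculations", here is a back-of-envelope sketch. It assumes a GPT-3-class model of roughly 175B parameters and the common rule of thumb that a dense transformer forward pass costs about 2 FLOPs per parameter per generated token; both are approximations for illustration only.

```typescript
// Back-of-envelope FLOPs per query for a GPT-3-class model (~175B params).
// Rule of thumb: forward pass ~2 * (parameter count) FLOPs per token.
const PARAMS = 175e9;
const FLOPS_PER_TOKEN = 2 * PARAMS; // ~3.5e11, hundreds of billions per token

// A few-hundred-token answer is then on the order of 1e14 FLOPs per query.
const answerTokens = 300;
console.log((FLOPS_PER_TOKEN * answerTokens).toExponential(2)); // ~"1.05e+14"
```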

Generative AI computes results from scratch every single time. So if a user goes there and types 'Tell me some good restaurants in New York' ten times, nothing is cached or reused; it does those billions of calculations each of those ten times, costing 36 cents per query or $3.60 total, and that is what drives the exponential demand for hardware.

Meanwhile, the software gets a linear increase: an extra row or log line in a database, or an extra Worker invocation and database call.

45 Likes

@wpr101

I am trying to understand why hardware companies like Nvidia and Super Micro Computer are benefiting dramatically more from AI than software companies like Cloudflare, Snowflake, and Datadog.

I might be able to help a little on the hardware half of the question.
It’s important to understand that it’s possible to create a range of AI outcomes using the same corpus as input, in the same way that it’s possible to create a low-resolution and ultra-high-resolution rendering of the same picture/image.

A “low resolution” AI capability would be characterized by only being able to predict a word or two ahead, given an analyzed corpus and a query. For instance, a low-res AI could be expected to guess the next word in this sentence: “The Queen really digs Buckingham _______.” If the AI correctly predicts the word “Palace” we might conclude it is capable of guessing one word ahead, given a query. One term for measuring this is “context”; in this case we might assert the AI has demonstrated it is capable of handling “context-1” queries.
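
A toy sketch of such a context-1 predictor (a made-up micro-example, nothing like how a real LLM is built) would just look up the most frequent word that followed the previous word in its corpus:

```typescript
// Toy "context-1" predictor: given exactly one preceding word, return the
// most frequent next word observed in a (tiny, hypothetical) corpus.
const bigramCounts: Record<string, Record<string, number>> = {
  Buckingham: { Palace: 42, Gate: 3 },
};

function predictNext(prevWord: string): string | undefined {
  const candidates = bigramCounts[prevWord];
  if (!candidates) return undefined;
  return Object.entries(candidates).sort((a, b) => b[1] - a[1])[0][0];
}

console.log(predictNext("Buckingham")); // "Palace"
```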

Such a C1 AI could fit entirely on your phone, including the trained-model and the query capability. And we might denote the compute power required for this “low-resolution” AI as 1X.

The key thing to know on the hardware side is that as your requirements demand a LINEAR increase in the distance between words (“context”) that the AI can analyze, it drives a GEOMETRICAL increase in the compute power required to create the trained-model.
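
One concrete way to see that, assuming a standard transformer where attention compares every token in the context with every other token, is that the attention cost grows roughly with the square of the context length:

```typescript
// Rough scaling sketch: self-attention compares every token with every other,
// so its cost grows roughly as context_length^2 (per layer, constants ignored).
function relativeAttentionCost(contextLen: number, baseLen = 1): number {
  return (contextLen / baseLen) ** 2;
}

console.log(relativeAttentionCost(2));    // 4 -> double the context, ~4x the work
console.log(relativeAttentionCost(10));   // 100
console.log(relativeAttentionCost(1000)); // 1,000,000 relative to context-1
```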

So. On the hardware side. Think of the gap between what ChatGPT can do NOW vs. what we can IMAGINE we'd like it to do. That gap can be closed ONLY with ORDERS OF MAGNITUDE more computing power than was used for the current ChatGPT.

Also consider that the models will have to be constantly retrained as data is added to the relevant corpora.

To me these are some of the reasons why $NVDA sports the P/E it has right now.

I highly recommend starting here and clicking around:

46 Likes

It may be just a timing thing. Chamath Palihapitiya talked about the timing in the most recent “All-In” podcast. Here’s a diagram showing stock sector performance for companies involved in the mobile internet:

The first wave is chips, second wave is infrastructure & devices, and then the last wave, which is the largest wave, is the software and services sector.

While I believe this won't map straightforwardly to the AI world (it's not clear what new devices are needed, and the infrastructure wave here is the server-side deployment of the GPUs), I do think that we're just starting to see what the software and services side of AI will be, and that it will eventually be bigger than the chips/infrastructure side.

That would mean Nvidia today for wave 1; companies like SMCI, AWS, Azure, GCP, etc. for wave 2; and then the software companies. So some patience might be warranted, or one could invest more in wave 1 and wave 2 companies today, being ready to move into the software side in a year or so.

70 Likes