Does anyone think that competitors will start to take a bite out of NVDA sales?

Hi All,
TMF summarized a Barron’s article discussing how D.A. Davidson thinks that competitive pressure from internally produced chips at Google and Meta may decrease orders and drive NVDA stock down by as much as 30%.

Additionally, Intel has come out with a competitive chip, though it doesn’t sound quite as good as Nvidia’s top-of-the-line GB200 chip:

It sounds like Intel has some advantages over the older chips.

The devil will be in the details. Can Meta, Google, and Intel actually produce enough chips at the cost, scale, and power efficiency needed to compete with NVDA? On the other hand, Jensen Huang has said NVDA is hitting a tipping point with its business.

I’m considering reallocating some of my NVDA holdings. Anyone have anything to add?



I have been adding to my position at these levels, as I see the AI chip market expanding massively overall. The second article states that Nvidia has 80% of the AI chip market, which is crazy when you consider how large and mature this market is.

Comparing to another large market, public cloud, the breakdown looks like:

Amazon AWS - 33%
Microsoft Azure - 24%
Google Cloud - 11%
Then a number of providers in the 2-4% range: Alibaba Cloud, Salesforce, Tencent Cloud, Oracle Cloud

Still, even with only 33% of the market, Amazon AWS makes huge profits because the cloud market is so big.

With regards to Intel’s Gaudi 3, it looks like they have a lot of catching up to do. The second article states that:

Intel says the new Gaudi 3 chip is over twice as power-efficient as and can run AI models one-and-a-half times faster than Nvidia’s H100 GPU

The issue I see with this is that Nvidia’s recently announced Blackwell is 25x more power-efficient and 30x faster than the H100.

It sounds like Intel’s strategy is to launch Gaudi 3 shortly before Blackwell comes out, so Intel may have the fastest chip for a brief period of time. However, I think a lot of these customers will just wait for Blackwell to stay on the Nvidia platform, especially if Blackwell is going to be an order of magnitude faster and more power efficient than Gaudi 3.

I don’t really see Meta getting into the chip building game. They are Super Micro’s largest customer and seem content buying off the shelf AI servers.

Google has tensor processing units (TPUs), which are optimized for tasks such as machine learning. My understanding is these chips are just used internally, though.

My strategy with Nvidia is to add or hold at least until their market share significantly declines or they report a bad quarter.

One last point: they are lapping a super easy comp from the previous year, when they were supply constrained. Next quarter is Q1’25 on their calendar, and Q1’24 had these headline numbers:

Revenue: $7.2B
EPS: $0.82

For Q1’25 they are projecting:

Revenue: $24B
EPS: $5.50

So even if Nvidia only matches their guidance, they are going to report that revenue is up 233% and EPS is up 571%!
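A quick sanity check on that growth math (a throwaway Python calculation using the figures quoted above):

```python
# Year-over-year growth implied by Nvidia's Q1'24 results vs. Q1'25 guidance
# (figures as quoted above: revenue in $B, EPS in $).
def yoy_growth_pct(prior: float, current: float) -> float:
    """Percent increase from the prior-year figure to the current figure."""
    return (current / prior - 1) * 100

rev_growth = yoy_growth_pct(7.2, 24.0)
eps_growth = yoy_growth_pct(0.82, 5.50)

print(f"Revenue up {rev_growth:.0f}%")  # Revenue up 233%
print(f"EPS up {eps_growth:.0f}%")      # EPS up 571%
```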


40 years in the chip biz here, and I added to NVDA position today.

The benchmarks don’t begin to tell the whole story, but the H100 is old news.

It’s the system solution, including networking and software, that is the full story.


Google already produces its TPU AI chips at scale for use in its own data centers. The tech giant’s latest AI chip, the Cloud TPU v5p, was first announced in December, the same day as its chatbot Gemini. This latest version learns 3x faster than the v4. Keep in mind that Google Cloud is an open vendor service, so you will see whatever hardware and software (CUDA) its customers need to run their applications. They already have a lot of Nvidia hardware for their customers, and they use Nvidia in their own AI Hypercomputer as well.

So this is not a direct competitive threat to Nvidia; rather, it’s a rising-tide-floats-all-boats scenario. IMO nobody will be knocking off Nvidia anytime soon, and the competitive threat will be slow coming. My concern is how long the sales acceleration will continue and how to project its growth over the next couple of years into the stock valuation.



Microsoft has begun what is reportedly a $10 billion supercomputer data center in Mt. Pleasant, WI. They purchased 315 acres, a portion of the 1,200 acres of the ill-fated Foxconn project. The project is underway; I am aware of contracts being let last week for construction materials. I have no data on the technical details.



I read the $10 billion projection for Wisconsin about two weeks ago but could not confirm it again tonight. Here are some comments from Fortune on the subject.

The MS WI project is of particular interest since I lived in Mt. Pleasant for 12 years.

"Microsoft and OpenAI have been discussing a project called “Stargate” that would see Microsoft spend $100 billion to build a massive supercomputing cluster to support OpenAI’s future advanced AI models, The Information reported Friday.

To put this in context, Microsoft is known to have spent more than “several hundred million dollars” to build the clusters used to train OpenAI’s current top-of-the-line model GPT-4, which OpenAI CEO Sam Altman also has said cost more than $100 million to train. It’s also known that OpenAI is already training a successor model to GPT-4, likely called GPT-5, on one of Microsoft’s existing data centers. And we know that Microsoft last year broke ground on a new $1 billion data center in Wisconsin that analysts believe is intended to house the chips for training OpenAI’s next-generation models—probably due out in 2025 or 2026. The Information reported that this supercomputing cluster in Wisconsin may eventually cost as much as $10 billion, once the price of specialized Nvidia chips used for AI applications are factored in. So Stargate is anywhere from 10 to 100 times more expensive than any of the data centers Microsoft currently has on the books."

Sorry my initial post may have been misleading on the confirmed size of the WI project.



I know Jensen Huang is talking his book, but I agree with what he says here.

At GTC IR day: “NVIDIA is a market maker, not share taker. The reason for that is everything we do doesn’t exist when we started doing it. … The takeaway is you can’t create markets without software. It has always been the case. That has never changed. You could build chips to make software run better, but you can’t create a new market without software. What makes NVIDIA unique is that we’re the only chip company I believe that can go create its own market … and notice all the markets we’re creating.”

Muji wrote a new article about Nvidia, titled “Lock-in” (the actual article is behind the paywall, sorry).
“NIMs and microservices are not separate product lines or separately sold - they are all part of AI Enterprise v5.0. These modules will help push customers into AIE subscriptions (that recurring revenue) and deepen the lock-in.”

I’m overweight for the long haul, it seems.



This is a panel at GTC, linked here, hosted by Jensen Huang, with 10 of the original luminaries advancing AI, agreeing that the Nvidia GPU is the shape of AI as much as AI is the shape of the GPU. I found their conversation to have colored in this seemingly off-the-cuff remark to where it’s now how I see the structure of both in my mind’s eye.

My limited understanding of what they’re talking about is not why I’m invested in Nvidia; but I also found the article linked in Smorgasboard’s post about this very subject quite compelling…

…”it’s become harder for new companies to enter the market and provide competition, the existing players have consolidated. Years of mergers and acquisitions have remade the chip space – any given product segment only has a few companies left standing. In some cases, there may in fact only be one company who makes a particular part or component. This shift, combined with shifts in demand, have allowed companies to refocus away from rapidly scaling production to focusing on margins, cash flows, and balance sheet health. Capital expenditures and research budgets remain robust, but are not pursued at the expense of profitability. Inventories are more well controlled, order books are deeper, and customer relationships are more robust. In short, the companies have become better stewards of capital and the business models are more insulated than they used to be.”

…”Another way to target the industry is through leaders in specific types of computing. For example, companies like Nvidia who is the undisputed leader in the graphics space. While graphics cards used to be highly cyclical, used primarily for applications like gaming or crypto mining, they are also increasingly being used for AI computing in data centers and in the automotive industry. These are not only massive growth opportunities, they’re also going to become more stable industries – data center growth and refreshment cycles are planned years in advance and the companies ordering the components are significantly less volatile in their purchasing patterns than retail consumers focused on gaming installations.”

…”We could go on and on describing different companies in the ecosystem but the overarching principles are the same. Increasing specialization, increasingly wide barriers to entry, less competition, a better focus on margins – the space is maturing as it increases its (increasingly widespread) strategic (and geo-political) importance.”




I’m not as knowledgeable about chips as other Fools who have replied, but no one seems to have mentioned Tesla’s Dojo. With the help of ARM Holdings, lots of companies can design purpose-built chips for their own use. When the companies have markets as large as Apple and Tesla, that’s a big chunk of the AI chip TAM. Alphabet, Amazon, Apple, Meta, Microsoft, Tesla… At this time demand exceeds supply. What will happen when supply catches up? When will that be?

One of the huge developing markets is AI controlled humanoid robots. Most startups simply don’t have the resources to fund the requisite AI training to compete with the likes of Tesla. Nvidia is providing the training as a service to these startups. How much is that worth? (No idea).

From an investing standpoint I would like a less volatile ride. I’m using ARM and AMD to sell covered calls. ARM and AMD might not be as glamorous as NVDA but they are more diversified. I love ARM’s business model.

Denny Schlesinger


Hi Everyone,
Thanks for your feedback. I think I will stay the course with NVDA and reevaluate after next quarter’s earnings.




James Douma sums up Dojo vs Nvidia pretty well in this video clip (start at 1:50 in):

Basically, Dojo isn’t a general-purpose AI chip; it’s a chip Tesla designed to run Tesla-specific workloads. And Tesla is still buying lots of Nvidia chips. Douma considers Dojo an “insurance policy” against having a single source. But almost no company has Tesla’s resources and expertise to replicate that, and even then, the costs of porting Nvidia-based AI software to another architecture are large.

Not any time we can see from today. Does anyone think there are going to be fewer AI applications any time soon, or that retraining existing AIs on new data won’t be a constant thing?

As an ex-software programmer and retail electronic purchaser, I too love ARM’s business model. I’m sure Apple, Samsung, and other companies producing their own ARM-based designs love the business model, too.

As an investor, though, I dislike ARM’s business model and always have. Basically, ARM makes its money through licensing fees. Apple and other companies license its designs and then expand on them for their specific use cases. But, ARM makes much less money per chip than companies like Intel and Nvidia that fab their own chips. ARM’s business model hurt Intel, but ARM didn’t capture most of the money previously being paid to Intel.

Ironically, if you want to talk about using ARM-based chips for AI in the data center, Nvidia arguably makes the best ARM-based chips (its “Grace” series). So even there, Nvidia makes more money on ARM-based data center usage than ARM does.

That may not make them a good investment. AMD in particular gets a lot of love from the tech community, but it’s still second fiddle to Intel in CPUs and to Nvidia in GPUs. Nvidia is going to make more money on AI networking in one quarter than AMD is going to make on all of its AI in one year.

I cannot see a justifiable reason for AMD’s PE to be way higher than Nvidia’s. But, I admit I don’t know all of AMD’s market segments.


I don’t have any investment in Nvidia. Personally, the points below have given me pause:

  1. We are currently so early in the AI phase that I believe there is huge scope for optimization in both training and inference algorithms. My biggest concern is that somebody somewhere will invent an algorithm that overnight makes training and inference several orders of magnitude cheaper. As a developer, I have seen this happen repeatedly. It’s not a question of if, but when.
  2. I also believe most inference will happen on edge devices like iPhones, browsers, etc. Again, we have seen this happen repeatedly throughout software history.

These are all deflationary forces that are hard to predict, which also makes it hard for me to wrap my head around whether Nvidia will be a bigger company in 5-10 years than it is today. Hence I have painfully stayed out of Nvidia until now.


Absolutely. That doesn’t mean you don’t still need to train in large data centers processing lots of data.


Smorg, if they are training in large data centers, do those centers need to be close to the devices? Or could the models be trained and then passed to other centers close to the devices for inference?



The results of training are the parameter values, which can be exported anywhere that has the storage capacity. Such models are smaller than the size of the data on which they were trained.

Easiest example is a Tesla vehicle driving on FSD many miles from Tesla’s data centers, with no cell connection active.
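As a toy sketch of that training/inference split (a hypothetical example, nothing Tesla-specific): training chews through a large dataset, but the exported model is just a handful of learned parameters that any device can use for inference on its own.

```python
import random

# "Training" in the data center: fit y = w*x + b by least squares on
# 100,000 noisy samples (the true relationship is w=3, b=1).
random.seed(0)
xs = [random.gauss(0.0, 1.0) for _ in range(100_000)]
ys = [3.0 * x + 1.0 + random.gauss(0.0, 0.1) for x in xs]

x_mean = sum(xs) / len(xs)
y_mean = sum(ys) / len(ys)
w = sum((x - x_mean) * (y - y_mean) for x, y in zip(xs, ys)) \
    / sum((x - x_mean) ** 2 for x in xs)
b = y_mean - w * x_mean

# "Export": the trained model is just two floats, vs. megabytes of
# training data -- it can be copied to any device with room to hold it.
params = {"w": w, "b": b}

def predict(p, x_new):
    # "Inference at the edge": needs only the parameters, not the data.
    return p["w"] * x_new + p["b"]

print(round(predict(params, 2.0), 2))  # ~7.0
```

Real models have billions of parameters rather than two, but the asymmetry is the same: the exported artifact is far smaller than the training set and carries no dependency on the data center that produced it.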


I should have added, “IN THIS THREAD UP TO MY POST.”

Denny Schlesinger


Then why don’t they put the data centers in Prudhoe Bay, where it would be cold so they wouldn’t need as much electricity, and build electrical plants that can run on the plentiful natural gas?



I hear this a lot and am watching Groq closely; they are often mentioned as the next much-better-than-Nvidia company. But even those who are selling the Groq book admit that they are years away from scaling up.

Related to advancements in algorithms, chip design is changing.

Anyone have Groq as being able to scale up?


Groq is covered in this (imo) super-awesome article.
The author makes some interesting arguments, including:

  1. Memory is emerging as the biggest bottleneck as the number of parameters in training models increases exponentially
  2. GPUs actually weren’t designed from-the-ground-up to do LLM, and thus have baked-in architectural shortcomings at the memory and network layer
  3. Upstarts like SambaNova and Cerebras are designing chips from-the-ground-up with an architecture that natively handles LLM processing much more efficiently

My takeaway: I’m long $NVDA and holding, but it’ll be interesting to monitor these upstarts. Also, unlikely as it may seem now, I think AI chips may eventually become the commodity the market wants them to be. Whether that happens in 5 years or 5 decades, who knows.

“As GPUs weren’t originally designed for training LLMs, there is the possibility of radical improvement in chip design and some candidates seem to have embarked on a few promising approaches already. But to conquer the training market they also have to simultaneously offer solutions for networking and software, which makes the barrier that much higher.”

“The approaches to keep an eye on are from SambaNova and Cerebras. Both have ground-up system designs unencumbered by legacy limitations and they seem to produce order-of-magnitude improvements.”


There actually is at least one smaller datacenter provider that places their datacenters in places where energy is super-cheap (for example, where oil production generates natural gas that would otherwise be flared, or where there is ample geothermal energy available). That tends to work well for computing loads that don’t need much human interaction or fast response times – and those datacenters are typically smaller than would be built elsewhere.

It’s hard to grow that kind of business approach into a datacenter provider of the kind of really large scale that you see with a hyperscaler like Google or a datacenter REIT like EQIX. Conversely, it’s hard to see a player like those two, operating at a very large scale, becoming interested in chasing after the opportunities that a smaller player can pursue.