Nvidia - Semiconductors and AI

Given the high interest in Nvidia, I want to share my perspective. Although I don’t hold a position in Nvidia, I work at the intersection of semiconductors and AI and felt it was important to contribute to this topic which is of interest to the board.

In my view, Nvidia is not in the same realm of companies as Apple, Meta or Google. These companies have established deep competitive moats and possess unique attributes that underpin their dominant market positions. A few areas to consider when evaluating Nvidia:

Hardware - the phrase “Capitalism is brutal” must have been coined with hardware businesses in mind. They eventually get commoditized and competition is fierce.

Competition - AMD offers similar GPUs, and Intel’s Gaudi 3 is competitive in terms of price/performance. Significant investments are being made in this space by various companies, intensifying the competitive landscape.

Deep-pocketed customers - While CUDA provides Nvidia with a competitive moat, the high price points of their products are a concern. Major customers, who often have substantial software expertise, may seek to bypass CUDA or support alternatives from AMD, Intel, or others to reduce costs. The high gross margins (75%) mean that customers are incentivized to introduce more competition to drive prices down.

Training vs. Inference - Nvidia excels in training AI models, which is currently a major focus. However, training occurs less frequently compared to inference, which happens continuously and at a much larger scale. For inference tasks, high-powered GPUs like those from Nvidia may not be necessary, and less expensive solutions could be sufficient.
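
To put very rough numbers on that (a back-of-envelope sketch using the common ~6·N·D training-FLOPs and ~2·N-per-token inference-FLOPs approximations; every figure below is made up for illustration, not from any vendor or model):

```python
# Back-of-envelope: one-time training compute vs. ongoing inference compute.
# Uses the common rough approximations (~6*N*D FLOPs to train a model with
# N parameters on D tokens, ~2*N FLOPs per generated token at inference).
# All numbers below are hypothetical.

params = 70e9              # model size (parameters), hypothetical
train_tokens = 2e12        # tokens seen during training, hypothetical
requests_per_day = 1e9     # serving volume, hypothetical
tokens_per_request = 500   # generated tokens per request, hypothetical

training_flops = 6 * params * train_tokens                       # one-time cost
inference_flops_per_day = 2 * params * requests_per_day * tokens_per_request

days_to_match = training_flops / inference_flops_per_day
print(f"training: {training_flops:.2e} FLOPs (one-time)")
print(f"inference: {inference_flops_per_day:.2e} FLOPs per day")
print(f"inference compute passes the training run after ~{days_to_match:.0f} days")
```

The exact figures don’t matter; the point is that at scale the cumulative inference bill passes the one-time training run within weeks, and inference is exactly the workload where cheaper, lower-power silicon can be “good enough.”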

To be clear, Nvidia has had a great strategy, executed flawlessly, and deserves everything it has gotten. It’s my opinion that it’ll be hard for them to maintain high market share AND have 75% gross margins. They are just currently priced like they can have both. If you have a large Nvidia position, I recommend thoroughly understanding their technical strengths and closely monitoring what the competition is developing.

For those who think this time is different, remember Cisco. Twenty-five years ago, Cisco was the most valuable company in the world, hailed as the “Backbone of the Internet” (sound familiar?). In 2000, Cisco’s market cap was around $500 billion, but today it stands at roughly one-third of that. Why? Hardware gets commoditized. It’s as simple as that.

45 Likes

These are all fair points to consider, but I don’t view them as conclusively indicative that Nvidia is going the way of Cisco.

  1. Hardware
    Unlike Cisco’s CEO John Chambers, Huang has consistently embraced software. Cisco lost engineers frustrated with its efforts on Software Defined Networking (SDN), and they left to found a new company, Arista. Nvidia, by contrast, was early with CUDA, and today offers a variety of cloud and subscription computing models. How much those will grow remains a question, of course, but the comparison with Cisco just isn’t apt in my view. Plus, the complexity of what Nvidia is doing is far above anything Cisco ever had.

  2. Competition & Deep-pocketed customers
    While companies are trying to compete with Nvidia, from chip-makers like Intel and AMD to compute giants like Google and Microsoft (and throw Tesla in there, too), Nvidia is far and away not only the best/fastest, but also has the best price/performance ratio. Last year AMD and Intel announced new chips that they claimed rivaled Nvidia’s Hopper, but a quarter later Nvidia announced that Blackwell will ship next quarter, which is a big step above anything anyone else has - and the price increase for Blackwell is much smaller than the performance boost.

  3. Training vs Inference
    Huang recently estimated that Nvidia sales are 60% Training and 40% Inference, so it’s not like Nvidia isn’t getting a share of the inference pie.
    And we have to remember that for Generative AI, inference isn’t what it used to be. Huang points out that previously the problem was recognizing whether an image contained a cat or not, but with Generative AI, the problem is generating an image of a cat. That’s a much harder problem, requiring far more compute.
    And even in non-generative AI, with the trend towards inference use/answers feeding back into the AI models, there’s still debate over whether edge-based inference will be as prevalent as some estimate.

My feeling is that it’s not that “this time is different,” it’s that Nvidia is different than Cisco and Nvidia’s CEO Huang is different than Cisco’s Chambers. Huang saw that Moore’s Law getting harder meant a disaggregation of compute and spent almost all of Nvidia’s cash on hand to buy Mellanox back in 2019/2020. Pretty savvy (beating out Intel and others), and today Nvidia gets more than 4X as much revenue from AI networking as AMD gets from all of its AI products.

Yes, at some point AI hardware will be commoditized. But, that doesn’t look to be happening this year or next.

I think the real question for Nvidia is around expectations for growth from here. Next quarter Nvidia starts lapping the ChatGPT acceleration, which will hurt YoY numbers, and even if demand were growing exponentially, chip production is not. NVDA is currently priced around 39X forward earnings, so the stock is no longer the bargain it was a couple of quarters ago. And then there’s the law of large numbers - Nvidia is the 3rd largest company in the S&P 500. Another double in the next 12 months seems unlikely and investors are probably getting skittish. Many analysts have been trimming NVDA while it’s been growing - I would expect that trimming to accelerate.
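
To make the law-of-large-numbers point concrete (pure arithmetic on the 39X figure above; the earnings-growth scenarios are made up):

```python
# If the stock doubles, the forward multiple only stays where it is if
# forward earnings roughly double too. 39X is the multiple quoted above;
# the growth scenarios are hypothetical.
current_forward_pe = 39

for earnings_growth in (0.25, 0.50, 1.00):
    implied_pe = current_forward_pe * 2 / (1 + earnings_growth)
    print(f"stock doubles, earnings +{earnings_growth:.0%} -> ~{implied_pe:.0f}X forward")
```

A double from here with anything less than another doubling of earnings means paying an even richer multiple.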

61 Likes

There are several other folks on the board from the chip industry who could probably weigh in on this, but 2 points come to mind that suggest semiconductors and microprocessors don’t fit the generic “hardware gets commoditised” comparison.

  1. Semiconductors actually predate computing and certainly networking infrastructure hardware (e.g. Cisco). If they were going to get commoditised to the same degree, it would have happened. Instead we have lived in a duopoly of CPU players and a duopoly of GPU players and an oligopoly of mobile RISC chip players since inception.

  2. Presumably one explanation for this is that the capital intensity ($ billions) of getting into the chip industry is prohibitive. The only real challenge to this has been the arrival of fabless, IP-based business models such as ARM’s.

Personally as a former holder of AMD and ARM, I feared technology obsolescence and competitor innovation as well as the cyclical nature of the industry more than commoditisation.

Ant

18 Likes

Smorg, are you hinting that you will be one of the people trimming his NVDA position?
Saul

12 Likes

No, I was not hinting at that, actually trying to point out that WS analysts have been getting Nvidia wrong even after the ChatGPT moment. They’re trained (pun intended) to sell after any run-up, whether deserved or not, and some think AI is just a phase like crypto, instead of a sea change.

I’m still bullish on Nvidia’s technology, market position, product roadmap, and leadership, but I gotta admit that it’s hard to see a $2.5 Trillion market cap doubling in even two years. That said, I don’t know where else to put this money right now.

Some of the points raised in this thread (both pro and con) are discussed in this recent video:

(start at 25:35 if the link doesn’t take you there).

BTW, in terms of competition, I think there are only perhaps a couple of private companies, like Cerebras, who might be able to challenge Nvidia in the future. But, their technology isn’t proven and there is only prototype software running on this early hardware. Cerebras is using an SoW (System-on-Wafer) process that today is from TSMC (which Tesla is using, too). Read more about that here:

But, this new tech, even if it proves to be better, is still years away. Cerebras recently announced their 3rd generation product, which is useful for training only (they use Qualcomm for inference chips). You can read more on Cerebras here:

Here’s a caveat from the ZDNet article:

To be clear, Cerebras is making this comparison using a synthetic large language model that is not actually trained. It’s merely a demonstration of WSE-3’s compute capability.

26 Likes

Also, to be clear to others, the SoW technology is not in any way that of Cerebras. This is TSMC technology, and NVDA (and others) have been involved since day 1.

9 Likes

I believe the Anandtech article I linked makes that clear.

From the article:

TSMC has been offering its System-on-Wafer integration technology, InFO-SoW, since 2020. For now, only Cerebras and Tesla have developed wafer scale processor designs using it, as while they have fantastic performance and power efficiency, wafer-scale processors are extremely complex to develop and produce.

If you’ve got links describing Nvidia’s involvement, I’d love to read up on it, as I couldn’t find anything via a quick web search.

At the risk of diving into the tech weeds:

A chip-on-wafer version using CoWoS technology is expected to arrive in 2027, and will enable the “integration of SoIC, HBM and other components to create a powerful wafer-level system with computing power comparable to a data center server rack, or even an entire server.”…
TSMC’s ambitious pursuit to create gigantic chips, however, is dwarfed by Cerebras Systems’ newest Wafer Scale Engine 3 (WSE-3) … This new chip created on a 5nm TSMC process, provides a staggering peak AI performance of 125 petaflops – which is equivalent to 62 Nvidia H100 GPUs.
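
For reference, the “62 Nvidia H100 GPUs” equivalence is just peak-FLOPs division. A quick sanity check (the ~2 petaflops per H100 is my assumption about which low-precision tensor-core rating is being compared, not something the article states):

```python
# Peak-FLOPs arithmetic behind the "62 H100s" claim.
# The WSE-3 figure is from the article; the per-H100 figure is assumed
# (~2 PF at low precision, roughly the H100's FP8 tensor-core peak).
wse3_peak_pflops = 125
h100_peak_pflops = 2   # assumed

print(f"~{wse3_peak_pflops / h100_peak_pflops:.0f} H100-equivalents")  # ~62
```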

Back to Nvidia, @This2ShallPass’s view of competition is correct, as these advances show there is quite a bit of research and development on new chips that might better serve AI training needs (and separately on inference, too), but my view remains that such efforts are too far away and too uncertain to be a reason not to invest in Nvidia - I certainly don’t see them as a reason to sell today.

26 Likes

A few points in response to the above posts.

I want to reiterate my main point: it’s going to be harder for Nvidia to keep their market share AND margins. No doubt they will continue to be a dominant player in this space, but in my opinion they won’t be able to keep both (for reasons I mentioned in my initial post).

@Smorgasbord1 I just used the Cisco analogy to show a recent example of exuberance in things that are the “backbone” of any new technology change - Internet or GenAI. I’m not saying Nvidia will follow the same path as Cisco (resisting SDN vs. embracing CUDA early). Software-defined networking was how competition came at Cisco, as the world moved toward Marc Andreessen’s prescient 2011 prediction that software is eating the world.

Actually, this might be a good topic for discussion on the board. Might help us find what the next SaaS is going to be. With every new technology revolution - who gets the spoils? Has there ever been a case where an enabler was able to garner most of the value? It always amazed me that ISPs were relegated to just monthly fees and heavy competition while Google and Meta took most of the value from the Internet. But maybe there are other examples from previous technology transitions that show this is not always the case. I don’t know.

On competition, it’s highly unlikely a startup would be able to disrupt Nvidia. It’s much harder and takes way longer in the semi space. Unlike software, moats here tend to be much larger and the cost to build a semi chip is huge. AMD and Nvidia have been building similar GPUs for the last 20-30 years. AMD has a very similar offering, and Intel’s Gaudi 3, while not a direct comparison, seems to offer promise.

On my comment about deep-pocketed customers, you can guarantee Google, Microsoft, Meta and Amazon have large teams working on how to use AMD’s GPUs to get similar performance. Also, AMD is working on software similar to CUDA. Software abstraction can happen at any layer, and these customers know their training workloads and can optimize just for their own needs.

All that’s needed is for competition to take 10-20% market share and suddenly Nvidia’s prices have to come down significantly. This is key in my mind and something Nvidia owners should think about.

15 Likes

I think you are missing several points:

  1. We are at the BEGINNING of the AI revolution. Year 1.5. Whatever you worry about happening is still far away. Everything we hear is that these chips are consumed as fast as they come out of the factory, and the early ones will be obsolete in a few years.

  2. Huang is WICKED SMART. Like IMHO light years ahead of every other CEO on the planet. He saw this coming 10 years ago. He sees the need not to be JUST in chips. CUDA is not just to keep people locked into his chips. His little “inference modules” or whatever he calls them - snippets that enable you to do AI functionality that you can embed into your offering - are brilliant. Of course open source may compete but he will always be one step ahead. He is already in markets yet to rise. He is that kind of CEO.

You cannot predict what “the next SaaS” is going to be. It’s going to come out in the winners as they win. LLM chat bots may not even play in the big-money game. ML is equally important or more so. APP might be a winner with machine learning to improve advertising, hopeful there, a little too early to tell. Surely some companies will win in drug / diagnosis related AI. AI/ML will pop up EVERYWHERE and the winners will not be concentrated in one place. We are too early in the cycle to tell and the winners will become clear as they win and not before. Remember sexy does not necessarily lead to revenues and profits.

Why do you think that? What’s needed is BETTER chips or at least ON PAR chips at cheaper costs to eat into NVDA’s profits. For sure competition WILL take 10-20% of market share, maybe more, but if NVDA has got the best product, it will not impact prices or margins if enough companies/nations want that product.

Bottom line: I count among my biggest mistakes not investing in ZOOM when it was obvious it was rising, because I saw past its adoption and then thought, what’s next? Where are the adjacencies? I was right long term but I left tons and tons of money on the table short term.

To worry about what will happen N years out and leave this massive shift off the table, well, that’s your business. But don’t rule out Huang continuing to skate well ahead of the puck for several years to come, and that’s all I need to stay in for now.

43 Likes

I think there is. AAPL got most of the profits in the smartphone market. NVDA is getting most of the profits in chips for AI. NVDA is innovating too fast for anyone to catch them, plus NVDA has multiple advantages that need to be addressed simultaneously by a competitor. The top-end product is the H100, so a competitor needs to be better than that by 10x today. In 3 months NVDA has the H200, then in 6 months the B100, then in 12 months the B200, then in 20 months the R100, and then after that the R200. They will keep advancing not just compute capacity and speed but also keep reducing energy consumption. And software. And the number of developers using the technology. And the relationships with manufacturing partners. I’d say it’s almost impossible for anyone to compete within the next 3 years. Even if we see a giant leap forward such as a quantum computer, it still requires time and billions of dollars in capital investment to get a working prototype. Then it takes time to scale up that new technology. AMD has been trying to compete with NVDA for many years and they get some scraps and not nearly the profits that NVDA gets.

GauchoRico

46 Likes

Quantum computers may not be in direct competition with Nvidia even when they are generally available. Having been briefly invested in IonQ, I did a bit of studying about these machines. The point is that quantum computers are not head-to-head competitors of traditional computers (CPU, GPU or TPU architectures).

For sure, quantum computing will provide a giant leap forward when it comes to solving certain types of very complex problems. I’m not well enough versed in the field to assert one way or the other if quantum computers will lend themselves to AI use cases (LLM, generative AI, ML). I’m just saying that Nvidia’s GPU advances in compute power and cost of ownership may still exceed what might be available from quantum computers - at least the first two or three generations of commercially available machines.

6 Likes

Innovation in this space is also limited by the capacity of the semi supply chain. I’ve heard that NVIDIA has booked TSMC through 2027 with fully paid for orders.

Even if you have a better design, TPU, NPU, whatever, you still need to secure advanced chip manufacturing and packaging capacity, which TSMC controls today, as Samsung and Intel are quite behind. It’s hard to see how much competition there can be from smaller players. The limit to the investment upside, in my opinion, is how much capex money the world has and is willing to throw at this AI development, not competitors, in the near future.

18 Likes

Jensen Huang, CEO/founder of Nvidia, giving the keynote at Computex 2024, explaining ‘the treasure of Nvidia’. I know Smorgasbord1 has repeatedly said that the CUDA libraries are Nvidia’s moat, but until Jensen’s explanation here at 29:35, I didn’t understand what Smorgasbord1 was saying.

At least one of the CUDA libraries (there are 350 of them) is needed to make (fully) accelerated computing possible.
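
For anyone who hasn’t touched this stuff, here’s a minimal sketch of what “using a CUDA library” looks like in practice. It assumes an Nvidia GPU plus the CuPy package (a NumPy-like Python wrapper over CUDA libraries such as cuBLAS); this is my own illustration, not anything from the keynote:

```python
# The same one-line matrix multiply, once on the CPU (NumPy) and once
# dispatched to the GPU through a CUDA library (CuPy routes matmul to
# cuBLAS). Assumes an Nvidia GPU and CuPy installed.
import numpy as np
import cupy as cp

a = np.random.rand(2048, 2048).astype(np.float32)
b = np.random.rand(2048, 2048).astype(np.float32)

c_cpu = a @ b                          # CPU path: NumPy
c_gpu = cp.asarray(a) @ cp.asarray(b)  # GPU path: CUDA library underneath

# Results agree up to floating-point accumulation differences.
print(np.allclose(c_cpu, cp.asnumpy(c_gpu), rtol=1e-3))
```

The point Jensen is making, as I understand it, is that the chip only gets used because library layers like this make acceleration a one-line change for developers.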

Thanks to all the wonderful contributions here. It just keeps getting better.

Best

Jason

17 Likes

Good discussion. I was clear I don’t have a position and my post was only to share my thoughts as someone in the field. I get a lot from this board and wanted to contribute on a topic that I have some knowledge about.

I would just say a couple more things for longs to think about. Good luck to you all.

Huang has been leading Nvidia for 30+ years. What he has done in the last 5 years is nothing short of amazing, full credit to him. But, to say he’s light years ahead of everyone else shows a lot of exuberance that’s not grounded in facts. Was that genius just sleeping for 25 of the last 30 years? Again, I’m not criticizing him or anything. You have to remember AMD and Nvidia have been making similar products for many, many years. This is not a company that came up with something brand new that will take competition years to catch up to.

Pricing and margins - AMD products cost only ~25% of Nvidia’s. Yes, Nvidia is enjoying that monopoly position at this point. If AMD manages to get 10+% more share, though that doesn’t sound like a lot, it will put a lot of pressure on margins. If you don’t agree, look at Intel and AMD before 2018. AMD barely held on to 20% market share in many end markets and Intel had the lion’s share. Still, all their end customers wanted AMD to stay alive, and they made sure of that, which meant Intel’s margins, though very good, were never exorbitant.
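
To make the margin math concrete (a toy sketch; the 75% starting margin is from earlier in the thread, the price-cut scenarios are made up):

```python
# Rough illustration of how price cuts eat gross margin when unit cost is
# fixed. Starting margin of 75% comes from this thread; price-cut
# scenarios are hypothetical.
start_margin = 0.75
unit_cost = 1 - start_margin          # cost as a fraction of today's price

for price_cut in (0.10, 0.20, 0.30, 0.40):
    new_price = 1 - price_cut
    new_margin = 1 - unit_cost / new_price
    print(f"cut price {price_cut:.0%} -> gross margin {new_margin:.0%}")
```

The margin doesn’t collapse overnight, but it drifts down with every price cut made to defend share, and that’s the pressure I’m talking about.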

Semi industry - maybe I’m jaded being in this industry. But, it’s highly competitive and very few players enjoy sheer dominance. 70%+ margins are unheard of.

18 Likes

I guess the history of AI and Nvidia’s role in its development aren’t well known, even by those in the industry. Nvidia’s AI dominance isn’t some sudden flash in the pan.

Arguably, the Deep Learning Revolution began a dozen years ago, when Professor Geoffrey Hinton and some grad students won the 2012 ImageNet competition with AlexNet, achieving an image-recognition accuracy that had never been seen before, using a new kind of Neural Net technology, built on Nvidia hardware.

According to Hinton, AlexNet would not have happened without Nvidia. Thanks to their parallel processing capabilities supported by thousands of computing cores, Nvidia’s GPUs — which were created in 1999 for ultrafast 3D graphics in PC video games, but had begun to be optimized for general computing operations — turned out to be perfect for running deep learning algorithms.

“In 2009, I remember giving a talk at NIPS [now NeurIPS] where I told about 1,000 researchers they should all buy GPUs because GPUs are going to be the future of machine learning,” Hinton told VentureBeat last fall.

(From How Nvidia dominated AI — and plans to keep it that way as generative AI explodes | VentureBeat )

It was clear parallel processing was the path forward for AI. So, why didn’t Intel and AMD invest in GPUs and AI a decade ago? After all, Hinton told everyone a decade and a half ago that GPUs were the key. Was Huang a genius, or was everyone else not paying attention, or just not interested?

Nvidia was pushing non-graphics compute on GPUs well before 2012, releasing the first version of CUDA in 2007, which is what enabled researchers to program the GPUs for AI computing. This is Nvidia realizing that their GPUs could do more than just video games, and enabling that to happen - in 2007, with development on it having started even earlier, of course.


Nvidia is exactly a company that came up with something brand new that is taking the competition years to catch up. Meanwhile, Nvidia keeps pumping out innovations and new products.

Intel remains in a world of hurt, doing both design and fabrication. AMD caught Intel flat-footed in the x86 world, but that’s so last couple of decades. ARM ate Intel’s lunch in the mobile space, and AMD doesn’t play in the ARM stadium.

This isn’t just who has the fastest chip. Nvidia mostly sells GPUs on boards or in modules (up to 8 per “tray”) that use NVLink for fast interconnection between them. From the article above:

“While other players offer chips and/or systems, Nvidia has built a strong ecosystem that includes the chips, associated hardware and a full stable of software and development systems that are optimized for their chips and systems,” analyst Jack Gold wrote for VentureBeat last September.

But, even on the basis of chips, Intel and AMD are just now shipping units they claim are as fast as Nvidia’s Hopper (which Nvidia disputes). In a few months, Nvidia will be shipping Blackwell, which is 4X faster (30X in the ARM-pairing “Grace Blackwell” configuration, while using ¼ the power). And when will the next generation of AMD and Intel AI chips come to market? It’s been a two-year cycle at best for them, while Nvidia is now on a one-year cycle.

One thing that’s different today from the x86 cycles of the last century is that speed matters. People were willing to save money on home or even business PCs that had AMD processors even if they were slower. But with AI, the time to market isn’t just computer scientists writing code, compiling, and testing; there’s a whole cycle of training. How long does it take to feed the entire internet into your NN (or transformer, to keep things current), then test the results? Tesla has been spending billions of dollars on Nvidia chips, and only recently stopped being “compute bound,” as Musk called it, training on who knows how many miles of car camera data.

Jensen Huang on the Nvidia ER call:
“Let me give you an example of time being really valuable, why this idea of standing up a data center instantaneously is so valuable and getting this thing called time to train is so valuable. The reason for that is because the next company who reaches the next major plateau gets to announce a groundbreaking AI. And the second one after that gets to announce something that’s 0.3% better. And so the question is, do you want to be repeatedly the company delivering groundbreaking AI or the company delivering 0.3% better?”

@This2ShallPass is correct that competition is going to try really hard to do the “your margin is my opportunity” thing that happens when one company dominates so lucratively. The problem appears to be that this is currently a space where even the fastest isn’t fast enough, and getting your completed trained model out there first is important in terms of recognition and therefore revenue. So, the days when AMD could make a good business with an x86 chip that was 85% as fast as Intel’s but 25% of the cost don’t appear to map to today’s AI world. At least not yet.

55 Likes

I’ll throw it out there bc nobody has.

INTC and MSFT in the 1980s and 1990s. It’s ancient history I suppose, and it’s easy to dismiss INTC as a lumbering shell of itself. But the run from the mid-1980s to Y2K was phenomenal. Eventually the x86 was commoditized, but INTC banked a lot of coin before it was commoditized.

(I preface everything with the qualification that I’m not a tech expert by any means. But I did live through a largish portion of the last century :grinning:)

It seems to me that a parallel with the INTC/x86 perfect storm isn’t completely present for NVDA. INTC benefitted first from IBM’s adoption of the 8086 for its PCs and later from MSFT developing Windows specifically for the x86. The explosion of software development that followed was eye-opening for virtually everyone who lived through that era. And a cooperative pattern of obsolescence occurred as each new chip bred new software that eventually sunsetted that chip. Industry standardization on the x86 architecture (and Windows for that matter) was really the key factor for the success of both companies. That cooperation between the two giants kept competition at bay.

It’s not clear to me whether that kind of situation is setting up. Is it possible for NVDA to develop along those lines with CUDA? Is it possible for MSFT (NVDA or someone else) to come along and adapt or expand on CUDA to produce an OS that is as disruptive for AI as Windows was for the PC market of the 1990s?

Without that, I’m unsure that NVDA can carve out a niche as deep as INTC did in the 20th century CPU market. Being the market’s best isn’t as deep a moat as being the industry standard.

This thread has been very interesting to me.

Peter

18 Likes

@This2ShallPass thanks for starting this thread with a great post.

I am curious about who will use the chips to build AI and dominate.

Obviously, Tesla and Meta come to mind immediately, but next are also AMZN, GOOG, MSFT. And AAPL better be working on their own mega-move.

Clearly, very few companies have what it takes, from the data, to the people, to the cash to go big on AI. Many have been using AI for very specific applications, but that’s different. I am thinking applications on a massive and consumer-facing scale like a Tesla car, a humanoid robot, generative AI models, enterprise-level models such as predictive maintenance models, etc.

Are there any other obvious publicly traded companies that are publicly known to be big NVDA customers at present?

What do others think about who, aside from the obvious Big Tech players, has a serious shot to be a dominant player at scale?

It seems to me that, contrary to all the talk, we do not yet have any idea what domination and concentration can really mean in an age in which only a handful of players are capable of fully and swiftly exploiting the potential of the next wave of transformative tech innovation.

I am personally rethinking the whole idea that smaller cap software companies can return nearly as much as the behemoths, especially on risk adjusted basis, over the next few years. Jamin Ball’s updates on Q1 and today on IT spend have been particularly gut wrenching (though I left most of SaaS back in Jan 2023, I still follow the sector picture).

8 Likes

Yeh - I read that too and felt it was an interesting hypothesis wrt the bifurcation between data/AI and non-data/AI tech spend. However, apart from Nvidia and SMCI, some very clear recipients of prioritised data/AI spending, such as Snowflake and MongoDB, are really in the dog house and not roaring away with tech spend at the expense of the rest of SaaS. On the other hand, Monday and cybersecurity still seem to be getting good growth in spend and have guided acceptably.

Ant

13 Likes

These big guys are going huge on AI chips for two reasons: 1) they want to produce an LLM solution that will win, and 2) AMZN, MSFT, and GOOG want to have AI GPUs to rent out to all the companies that are going to be doing AI, some of which will be winners.

To say only big guys will be able to “dominate” will leave huge opportunities off the table. And to say it has to be consumer-focused is also leaving a lot of opportunities off the table.

You want to predict the big winners but you simply cannot yet. It would be like trying to predict myspace / facebook before they came up, or AMZN or Salesforce or whatever. They will appear. But we don’t have a crystal ball into innovation.

Obviously AMZN, MSFT, and to a lesser extent GOOG will see their cloud revenues go up from providing AWS, Azure, etc. services. But I don’t know if that will be the “big win” as they are already so huge.

The little winners that, as you describe, use AI for “specific applications” will provide plenty of investment opportunities. We need to keep our eyes and ears open for them.

For example:

  1. AppLovin is a great example of potential AI benefits. Let’s say they are right that their newest release is so much better at predicting the success of an ad that a dollar spent on ads brings back more than a dollar in return. (I say “let’s say” because I want to see this play out over more than one quarter, in case new data proves this false. In any case, AI will eventually accomplish this at some company.)

At that point, if you want to advertise, won’t you spend your $1 to get back more than $1, and then won’t you spend $2 and $3 and so on? And sure there will be competition, but why would I move to another platform unless it’s significantly better, and if it is, then that new platform will be the winner, right? It’s like Huang says, do you want to be the first one to get it right or do you want to be the one who gets it 3% better?

  2. Right now, if you have a diagnostic X-ray, you rely on a radiologist to look at the X-ray and say what’s up. This relies on the skill and experience of the radiologist, and missing something small can be the difference between life and death. Literally.

There are companies racing right now to build AI diagnostic tools to look at these X-rays and provide diagnosis. This is an example where I would much rather have an AI bot that has been fed tens of thousands of X-rays with small subtleties interpret my X-ray than some random radiologist whose skill level I have no clue about.

If a company can say they diagnose cancer properly 99% of the time and have accuracy 20% greater than the average radiologist, and they provide a SaaS model for this, don’t you think they are going to do very very well? I do.

So what will every AI company have in common? The need to train on lots of data. And that data has to live somewhere - either in the cloud or, if there are privacy concerns, on-prem.

Sure, AMZN and MSFT will win because they will provide a place to host all this data, and so will other companies that host or provide a solution on-prem.

But if you go back a level, right now the best seats in the house still belong to NVDA, SMCI, and others directly supplying the chips and racks to every company that wants to use AI to create a better solution. That is where the domination and concentration is right now. Trying to see past that is futile, and also missing the opportunity smacking you in the face.

It’s hard to be patient but that’s what it takes. Right now the money is in the buildout. Sit back and enjoy the show, and the winners will appear.

52 Likes

It’s really difficult to predict which entities will dominate via AI applications, but something to consider is that there are new players altogether. Sovereign data centers have not been a particularly big thing in the past, but now there are several nations employing sovereign wealth in order to create enormous AI data centers. These entities were not even players in the past.

Another thing to consider: on Nvidia’s most recent CC, Jensen asserted that automotive is on track to be their largest enterprise vertical (within data center), at least for now. Automotive design centers may have purchased a lot of Nvidia graphics processors in the past for 3D modeling and other CATIA applications (interference analysis, serviceability, etc.), but they are building up AI capabilities for all new applications with respect to drive train, suspension, EV design, etc.

I’m confident that there is a whole raft of additional commercial AI applications that are not B2C.

@mizzmonika mentioned potential applications in health care with respect to X-ray analysis. This only scratches the surface of medtech AI applications that will be developed from new drug discovery to new diagnostic tools and probably even treatment involving new and improved devices and therapies.

I could go on, but it doesn’t take a lot of imagination to think of amazing new applications that will be facilitated by AI. I can’t predict how long Nvidia’s current sales run will last, but IMO there’s a lot of gas left in the tank. Are there competitive threats on the horizon? Certainly AMD and Intel and maybe others are nipping at their heels, but it seems that Nvidia’s pace of new development will keep those companies trailing for quite some time to come. Will there be a point where there is not much to be gained from the next generation of chip development? That’s a hard call, but if history serves as a guide, it seems the answer is no. AI will itself greatly improve the design and fabrication of new AI chips.

OK, I admit that the picture I’ve just drawn is too rosy. Nvidia is in a cyclical, feast-and-famine business. That has probably not changed. I don’t think we can yet discern the dynamics that will stem the tide, but it’s hard to believe that this business will just keep growing, especially at the current torrid pace. If anyone has a good idea of what the tell is for a slowdown before sales drop off, please chime in.

20 Likes