NVDA: One analyst says game over

https://www.barrons.com/articles/nvidia-one-analyst-thinks-i…

This one is an interesting read. I think Tinker said it a while back: CUDA (the software) is giving NVDA a huge advantage.

Here are some interesting quotes from the Barron's article above:

  1. “What Nvidia did with their announcements last week was to cause everyone, including Intel (INTC), but also startups, to re-examine their roadmaps,” says Hans Mosesmann of Rosenblatt Securities.

  2. Quote from the analyst: [CEO Jensen Huang] is very clever in that he sets the level of performance that is near impossible for people to keep up with. It’s classic Nvidia — they go to the limits of what they can possibly do in terms of process and systems that integrate memory and clever switch technology and software, and they go at a pace that makes it impossible at this stage of the game for anyone to compete.

Everyone has to ask, where do I need to be in process technology and in performance to be competitive with Nvidia in 2019? And do I have a follow-on product in 2020? That’s tough enough. Add to that the problem of compatibility you will have to have with 10 to 20 frameworks [for machine learning]. The only reason Nvidia has such an advantage is that they made the investment in CUDA [Nvidia’s software tools].

A lot of the announcements at GTC were not about silicon, they were about a platform. It was about things such as taking memory [chips] and putting it on top of Volta [Nvidia’s processor], and adding to that a switch function. They are taking the game to a higher level, and probably hurting some of the system-level guys. Jen-Hsun is making it a bigger game.

  3. An immediate result, Mosesmann believes, is that a lot of A.I. chip startups, companies that include Graphcore and Cerebras, are going to have a very hard time keeping up.

“He’s destroying these companies,” says Mosesmann of the young A.I. hopefuls. “These private companies have to go back and get another $50 million [of funding].”

I think that Mosesmann makes a very good case and I agree that his predictions are quite likely to come true.

Chris

37 Likes

This may have been written about already, but they have done some cool things to win by creating standards:

https://www.google.com/amp/s/www.fool.com/amp/investing/2018…

https://www.google.com/amp/s/www.barrons.com/amp/articles/a-…

https://www.prnewswire.com/news-releases/next-generation-sma…

https://www.google.com/amp/s/www.barrons.com/amp/articles/nv…

2 Likes

It almost seems as if this breather in NVDA’s share price, amid all the screaming signals that NVDA is likely to keep dominating and IS NOT a (mere) chip company, is a gift for folks to buy here right before the deadline to contribute 2017 funds to an IRA.

I’m thinking tomorrow I need to finally get an IRA started… and once it’s opened and funded with 2017 money, I may look at some medium-term NVDA call options.

I’m currently doing a deep-dive on Nvidia, trying to get my head around all the buzzwords flying around.

I’m not sure CUDA is as big an advantage as I first thought. It certainly seems to be the standard because of a) the performance of Nvidia’s chips, and b) the effort Nvidia has put into the CUDA software.

OpenCL, however, is a viable alternative. It’s not as widespread as CUDA, but more and more of the big AI toolkits support it.

I think CUDA is still the standard for now, but I’m not sure it will matter much over the medium term as the APIs move to a higher level. You learn the framework’s API (for AI: Torch, TensorFlow, etc.) and don’t care at all about CUDA or OpenCL underneath. The CUDA/OpenCL question is only relevant if you’re actually writing custom code for the GPU (which, admittedly, companies like Adobe are).
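To make that concrete, here’s a minimal sketch (my own illustration, not something from the article or the thread) of what “not caring about CUDA” looks like in PyTorch: the backend shows up in exactly one device-selection line, and everything else is hardware-agnostic.

    import torch
    import torch.nn as nn

    # The only line that mentions CUDA at all: use the GPU if one is
    # available, otherwise silently fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

    # A toy model and a batch of random data, just to show the pattern.
    model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).to(device)
    x = torch.randn(32, 128, device=device)

    logits = model(x)    # dispatched to CUDA kernels on GPU, CPU kernels otherwise
    print(logits.shape)  # torch.Size([32, 10])

The framework does the dispatching to CUDA (or CPU) kernels under the hood, so the lock-in, to the extent it exists, lives a level below the code most AI developers actually write.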

The question I have is: what’s the impact of the Tensor Processing Units created by Google? How much of an impact will they have on Nvidia’s datacenter growth? And how much of that datacenter growth is driven by AI (which TPUs are designed for) versus other applications (virtualised GPUs, etc.)?
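For what it’s worth, here’s a hedged sketch of why the TPU-vs-GPU choice is mostly invisible at the framework level. This uses today’s TensorFlow 2 distribution API, which postdates this thread, and it’s my own illustration rather than anything from the article: the same Keras model runs on a TPU or on GPUs, and the only thing that changes is the strategy object.

    import tensorflow as tf

    # Try to attach to a TPU; if none is available, fall back to whatever
    # GPUs (or just the CPU) TensorFlow can see.
    try:
        resolver = tf.distribute.cluster_resolver.TPUClusterResolver()
        tf.config.experimental_connect_to_cluster(resolver)
        tf.tpu.experimental.initialize_tpu_system(resolver)
        strategy = tf.distribute.TPUStrategy(resolver)
    except ValueError:
        strategy = tf.distribute.MirroredStrategy()

    # The model definition is identical either way; only `strategy` differs.
    with strategy.scope():
        model = tf.keras.Sequential([
            tf.keras.layers.Dense(64, activation="relu", input_shape=(128,)),
            tf.keras.layers.Dense(10),
        ])
        model.compile(
            optimizer="adam",
            loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
        )

If the accelerator really is that swappable from the developer’s seat, then the TPU-vs-GPU fight comes down to price/performance and availability rather than developer lock-in, which is the same point as the CUDA/OpenCL one above.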

It’s early days in my deep dive, but Nvidia seems to be in the sweet spot of an insatiable demand for more and more computing power. GPUs are picking up a lot of the slack that CPUs are leaving, and Nvidia is in the right place at the right time.

cheers
Greg

3 Likes

It’s not the hardware, it’s not the software… It’s “Switching Costs!”

Why can we bring all businesses down to Financial Statements? Because ultimately all business is about money, about wealth creation. Whatever NVDA built its moat with, we can bring it down to dollars and cents: the cost of doing something else, of switching to a new standard, a new chip, a new concept, new software. This is why there is first-mover advantage and path dependence. For something new or different to break in, it has to offer something really special: “disruption.”

Why did Pure Storage choose Nvidia and Arista Networks to build AIRI? They sure know more about that stuff than we ever will. Pure Storage is betting that Nvidia and Arista Networks are the industry standards. The AI ecosystem is building deeper and wider moats!

The importance of Switching Costs for investors is that we can apply that concept to any business, just like we can use Financial Statements for any business. What keeps people loyal to Coke? Brand name? There is little cost in switching to Pepsi. What kept users loyal to Windows? Having to learn a different user interface, having to use different file formats. Mac had to become disruptive to make people switch. It was probably the iPhone that did it, a kind of back-door attack on Windows. For years people were saying that Intel would eat ARM Holdings’ lunch. It never happened because of switching costs. Once mobile was locked into the ARM architecture, Intel had to produce something disruptive, not something just as good or a bit better. That is a huge hurdle.

Denny Schlesinger

20 Likes

It certainly doesn’t seem dumb to choose Nvidia and Arista. It’s just a matter of figuring out what’s too much to pay for them.