An IBD article on Nvidia, which, sigh, is now ranked #8 in IBD. More's the shame. The article names some ASICs that want to compete for Nvidia's customers in certain aspects of AI, primarily deep learning (presently the largest segment, to my understanding). The article goes on to say, basically, per this analyst, that no other option is likely to unseat NVDA given their whole product, product road map, software engineers and lead, and CUDA.
That the AI market is a multi-decade product cycle.
What the analyst does not mention, or what does not make it into the article, is that the GPU is always a moving target. So the competing technology not only has to be better than what Nvidia offers (otherwise, why leave the industry standard?), it has to stay materially better to be worth the trouble of adopting a new standard.
Something we have talked about many times: the place to look for competition is a function or market that Nvidia is underserving, that a new ASIC can serve better, and then build out from there.
To date the only such chip is Google's TPU (Tensor Processing Unit). And Google is providing it as a service in its cloud rather than selling the chips. Google seems to be doing a good job of creating the infrastructure around it as well.
However, as we discussed earlier (probably on NPI, as it went technically in-depth; I was in one of those moods), Google was playing with its performance figures and was not comparing apples to apples with Nvidia. Google's processor was at least 30% more expensive per unit of processing (whichever unit they used).
Thus Nvidia's latest top-line AI GPU is superior to Google's latest TPU, at least for most things. It got quite technical as to when a tensor processor becomes superior (and Nvidia includes tensor cores on its latest chip), and it is not for all applications.
Anyways, good discussion and update on Nvidia.