<<<AMD (AMD) and Nvidia (NVDA) discrete GPUs are also accommodated by the Intel photonic fabric. The recent Intel-AMD collaboration in mobile might be a clever way for them to disguise this bigger project.>>>
It seems the GPU is part of Intel's vision here, so this is not an issue for Nvidia. From what I read, Intel gave up on its own iGPU effort, or something along those lines, when it teamed up with AMD. So it is still Nvidia vs. AMD in that regard. From everything reported, Nvidia's lead over AMD in GPUs (at least in the highly profitable and desirable segments) is larger than Intel's advantage over AMD in CPUs.
The one current technology that may be material to Nvidia is Google's Tensor Processing Unit (TPU).
There is nothing proprietary about Google's tensor: Google made the specs public (though it has not published the specs of its next-generation tensor).
What makes it material is that Google has said its next-generation tensor should be able to handle machine learning training as well. That would be a challenge to Nvidia's core machine learning business. At the same time, Nvidia is making moves to dominate the execution side, running trained machine learning models, which is the job the tensor already does for Google (at least in some circumstances).
That is the one technology worth watching at the moment, I think, with regard to Nvidia. Will the tensor be materially less expensive and less energy intensive than the Nvidia GPU while also delivering greater performance? Google also has an AI infrastructure in place that could expand and challenge Nvidia.
However, Google is one of the prime players in AI and would be competing against many of its potential customers, from IBM to Baidu and many companies in between. Will these companies really want to buy tensors that benefit Google?
Further, to date, although Google gave away the specs, I have not heard of any companies using them to in-source their own chips. We don't necessarily know what Amazon, Microsoft, and IBM are doing with their data centers; they could in-source chips through a fabless model. However, given how important scale is in semiconductor production, I doubt even Amazon could productively in-source in a fabless manner and match the scale of Nvidia's GPUs.
Further, Amazon's customers want to use CUDA, Google's AI framework, or the other open-source frameworks Nvidia supports. Using an in-sourced chip could alienate part of their customer base.
Anyways, replacing Nvidia GPUs is not a simple thing, which is why any replacement technology would need to be materially better, and then some.
Tinker