I hold AMD but this article here has AMD as a sell…doc
This article looks like auto-generated content. Moreover, it’s entirely based on earnings estimates for the upcoming quarter only. The near-term outlook for the semiconductor industry has certainly been darkening and unpredictable, but it’s irrelevant to the long-term cash flows that an investor cares about. There very well could be money to be made in shorting AMD or other semiconductor stocks over the next few months, but I don’t play that game.
The ChatGPT-induced interest in AI and LLMs provides a gathering tailwind for NVDA and AMD. The next few years are likely on the upswing due to interest in AI alone.
It will be interesting to see whether AI/LLMs actually generate a big tailwind for AMD. I think Nvidia will get lifted first, since everyone associates them with successes in this area; AMD will have to show a couple of quarters of substantial revenue before the broader market will be persuaded.
Related to this, “NVIDIA NeMo” seems to be a toolkit for developers to create “conversational AI”. But nowhere have I been able to find whether “NeMo” actually stands for something, or is just a nice-sounding name. Anyone know?
In theory, Xilinx is a more powerful accelerator technology than NVDA’s.
There is an expectation that connecting an FPGA alongside the CPU using chiplets and Infinity Fabric will create an even more powerful solution (faster, with even lower latency and power consumption) to compete against NVDA (arguably the reason AMD bought Xilinx).
The key, though, is that NVDA’s software is a great moat; to challenge NVDA, AMD’s software stack and ease-of-use factor have to be upped. Xilinx is very strong in SW, so hopefully the osmosis effect has helped AMD kick their SW efforts into higher gear.
I think it depends on what you are accelerating. FPGAs are great for certain workloads, but others do better with neural-network accelerators, GPUs, tensor cores, IPUs, VPUs, and an explosion of other new and innovative products.
AMD Infinity Fabric is closely related to UCIe, CXL, and NVLink. They all build on PCIe-style hardware signaling, but may use different protocols to maintain cache coherency, and different signal counts to meet the bandwidth needs of their system. The latest Xilinx FPGAs connect to the host via either PCIe or CXL. AMD delayed Genoa to add the CXL interconnect, which will allow it to connect with many different accelerators. Given that UCIe and CXL are “standard” and supported by Intel, AMD, NVIDIA, and many more, they are likely to become the standard for accelerator attach and chiplet interconnect. I suspect AMD/Intel/Nvidia will continue to use their respective proprietary standards for multi-socket interconnect.
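To put rough numbers on why the bandwidth story matters for accelerator attach: since CXL 1.x/2.0 reuses the PCIe 5.0 PHY, a quick back-of-the-envelope calculation shows what an x16 link carries per direction. The figures below are published line rates with 128b/130b encoding only; real throughput is lower after protocol overhead, so treat this as an illustrative sketch, not a benchmark.

```python
# Rough per-direction bandwidth for PCIe-based links (illustrative figures).
# Assumes 128b/130b encoding (PCIe Gen3+); actual throughput is lower once
# TLP headers and flow-control overhead are accounted for.

GT_PER_LANE = {  # giga-transfers per second, per lane
    "PCIe 3.0": 8.0,
    "PCIe 4.0": 16.0,
    "PCIe 5.0 / CXL 1.x-2.0": 32.0,  # CXL 1.x/2.0 reuse the PCIe 5.0 PHY
}

def raw_gbps(gt_per_s: float, lanes: int = 16) -> float:
    """Raw line rate in GB/s per direction, after 128b/130b encoding."""
    return gt_per_s * lanes * (128 / 130) / 8

for name, gt in GT_PER_LANE.items():
    print(f"{name:>24} x16: {raw_gbps(gt):.1f} GB/s per direction")
```

An x16 Gen5/CXL link lands around 63 GB/s per direction before protocol overhead, which is the class of bandwidth these accelerator-attach debates are about.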
I agree that software is currently the largest barrier for AMD, and Xilinx is a partial solution to that.
CXL is great, but I am looking at how/when/if they bring the FPGA inside the package using chiplets.
Meanwhile, on the software front, here is a recent announcement on a unified front end across the portfolio for AI inferencing. This is potentially the start of the emergence of their upgraded SW strategy.
The AMD UIF aims to work across AMD EPYC processors, AMD Ryzen processors, AMD Instinct accelerators, Xilinx Versal Adaptive Compute Acceleration Platform (ACAP) targets, and Xilinx FPGAs.
The UIF looks like a great step forward. We will need to see the extent of industry adoption before any conclusions can be drawn.
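The value proposition of a unified front end is easy to see in miniature: one entry point, many backends, and application code that never changes when the hardware does. The sketch below is purely hypothetical — the backend names and `run_inference` function are illustrative stand-ins, not UIF’s actual API — but it shows the dispatch pattern such a frontend provides.

```python
# Hypothetical sketch of what a "unified inference frontend" buys you:
# one entry point that dispatches the same model to whichever backend is
# present. All names here are illustrative, NOT the actual AMD UIF API.

from typing import Callable, Dict, List

# Backend registry: target name -> runner. Real backends would wrap vendor
# runtimes (GPU, CPU, FPGA); these stubs just tag which target ran.
BACKENDS: Dict[str, Callable[[List[float]], dict]] = {
    "instinct_gpu": lambda x: {"target": "instinct_gpu", "out": [v * 2 for v in x]},
    "versal_fpga":  lambda x: {"target": "versal_fpga",  "out": [v * 2 for v in x]},
    "epyc_cpu":     lambda x: {"target": "epyc_cpu",     "out": [v * 2 for v in x]},
}

def run_inference(inputs: List[float], available: List[str]) -> dict:
    """Run on the first available target, in preference order.

    The caller's code is identical whether an accelerator is present or not;
    that portability is the point of a unified frontend.
    """
    for target in ("instinct_gpu", "versal_fpga", "epyc_cpu"):
        if target in available:
            return BACKENDS[target](inputs)
    raise RuntimeError("no supported target found")

# Same call, different hardware: falls back to the CPU when no accelerator exists.
print(run_inference([1.0, 2.0, 3.0], available=["epyc_cpu"]))
```

Whether UIF delivers this in practice depends on how complete each backend is, which is where industry adoption will tell the story.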
Given AMD is a member of the UCIe (Universal Chiplet Interconnect Express) consortium, I suspect an AMD CPU→FPGA combined-chiplet package will use UCIe as the interconnect. Using the UCIe interface would allow AMD to mix and match accelerators from different vendors depending on customer needs. Since these are customizations done during assembly, it would be a much lower development cost per custom solution than traditional methods.