I met with AMD CTO Mark Papermaster on the sidelines of ITF World, a conference hosted by semiconductor research institute imec in Antwerp, Belgium, to discuss some of AMD’s plans for the future. The highlights of the interview include Papermaster’s revelation that AMD will bring hybrid architectures to its consumer processor lineup, a first for the company. These designs mix larger cores built for performance with smaller efficiency cores, much like Intel’s competing 13th-Gen chips. Papermaster also spoke about AMD’s current use of AI in its semiconductor design, testing, and verification phases, and about the challenges associated with the company’s plans to use generative AI more extensively for chip design in the future. The full conversation follows below.
…
Mark Papermaster: What you’re going to see in PCs, as well as in the data center, is more bifurcation of tailored SKUs and processors coming out, because it’s really now where one size doesn’t fit all; we’re not even remotely close to that. You’re going to have a set of applications that are just fine with today’s core count configurations, because certain software and applications are not rapidly changing. But what you’re going to see is that in some cases you might need static CPU core counts but additional acceleration.
So, if you look at what we’ve done in desktop for Ryzen, we’ve actually added a GPU with our CPU. And that’s because it really creates a very dense and power-efficient offering, and if you don’t need a high-performance GPU, you can save energy with that sort of tailored configuration. If you do need tailored, extensive acceleration, you can still bolt on a discrete GPU. And the other example in PCs is the Ryzen 7040; we’ve actually added AI acceleration right into the APU.
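To make the AI-acceleration point concrete: AMD’s Ryzen AI software stack for the 7040’s NPU is built around ONNX Runtime, so one practical pattern for applications is to probe the available execution providers and fall back gracefully when the accelerator isn’t there. Below is a minimal sketch of that pattern; the provider names and the model path are illustrative assumptions (check the SDK documentation for your hardware), not anything AMD described in the interview.

```python
import onnxruntime as ort

# Ask this onnxruntime build which execution providers it supports.
available = ort.get_available_providers()

# Prefer the NPU when present, then a GPU via DirectML, then plain CPU.
# "VitisAIExecutionProvider" is the name used by AMD's Ryzen AI stack
# (an assumption here -- confirm against your SDK's documentation).
preferred = [p for p in ("VitisAIExecutionProvider",
                         "DmlExecutionProvider",
                         "CPUExecutionProvider")
             if p in available]

# "model.onnx" is a placeholder for whatever ONNX model you want to run.
session = ort.InferenceSession("model.onnx", providers=preferred)
print("Running on:", session.get_providers()[0])
```

On a machine without the NPU, the same code silently lands on the GPU or CPU provider, which is the point of the tailored-but-compatible configurations Papermaster describes.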
But what you’ll also see is more variations of the cores themselves: high-performance cores mixed with power-efficient cores mixed with acceleration. So where, Paul, we’re moving to now is not just variations in core density, but variations in the type of core, and how you configure the cores. It’s not only how you’ve optimized for either performance or energy efficiency, but stacked cache for applications that can take advantage of it, and accelerators that you put around it.
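For software, a hybrid design like the one Papermaster describes means the OS and applications need to know which cores are which. A minimal sketch of one way to discover that today on Linux, which exposes per-core-type PMU directories (cpu_core / cpu_atom) for Intel’s existing hybrid chips; the sysfs names a future AMD hybrid part would use are an assumption here:

```python
from pathlib import Path

def hybrid_core_map():
    """Map core types to CPU lists on a Linux hybrid system.

    Uses the per-type PMU directories the kernel exposes for Intel's
    hybrid chips today (cpu_core / cpu_atom); whether a future AMD
    hybrid part reuses these names is an assumption.
    """
    mapping = {}
    for pmu, label in (("cpu_core", "performance"),
                       ("cpu_atom", "efficiency")):
        cpus = Path(f"/sys/devices/{pmu}/cpus")
        if cpus.exists():
            mapping[label] = cpus.read_text().strip()  # e.g. "0-7"
    return mapping

print(hybrid_core_map() or "no hybrid core types exposed")
```

On a non-hybrid machine, neither path exists and the function returns an empty dict, so the same code runs unchanged across today’s uniform-core parts and tomorrow’s mixed designs.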
When you go to the data center, you’re also going to see variation. Certain workloads move more slowly […] You might be in that sweet spot of 16 to 32 cores on a server. But many businesses are indeed adding point AI applications and analytics. AI is moving beyond the cloud, where the heavy training and large language model inferencing will continue; you’re going to see AI applications at the edge, and in enterprise data centers as well. Those deployments are also going to need different core counts and accelerators.