When the hyperscalers, the major datacenter compute engine suppliers, and the three remaining foundries with advanced node manufacturing capabilities launch a standard together on Day One, it is an unusual, significant, and pleasant surprise. And this is precisely what has happened with Universal Chiplet Interconnect Express.
The PCI-Express interconnect standard and its predecessors have defined how peripherals hook into compute complexes for decades, and thankfully, after a long seven-year drought getting to 16 GT/sec data rates with PCI-Express 4.0, which spec'd out in early 2017 and first appeared in systems in late 2018, the PCI-Express standard looks like it can carry us to the end of the decade with a doubling of bandwidth every two years. This, of course, is a cadence that matches the Moore's Law improvements in transistor density and throughput for compute engines of all kinds, which is why it is natural enough to look inward and start using PCI-Express as the basis of a chiplet interconnect.
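To put some numbers behind that cadence, here is a quick Python sketch of our own making, not anything from the PCI-SIG, that tabulates the published per-lane signaling rates of each PCI-Express generation and the resulting raw bandwidth of a x16 link. The bandwidth function deliberately ignores encoding overhead, so it is a first-order approximation:

```python
# Per-lane signaling rates for each PCI-Express generation, in GT/sec,
# as published by the PCI-SIG. Delivered bandwidth roughly doubles each
# generation; the 2.0 to 3.0 jump got its doubling from the move from
# 8b/10b to 128b/130b encoding rather than from the raw signaling rate.
PCIE_RATES_GT = {
    "1.0": 2.5,
    "2.0": 5.0,
    "3.0": 8.0,
    "4.0": 16.0,
    "5.0": 32.0,
    "6.0": 64.0,
}

def x16_bandwidth_gb(rate_gt: float) -> float:
    """Raw unidirectional bandwidth of a x16 link in GB/sec.

    This ignores encoding overhead entirely, so real deliverable
    bandwidth is somewhat lower than these figures.
    """
    return rate_gt * 16 / 8  # GT/sec per lane * 16 lanes / 8 bits per byte

for gen, rate in PCIE_RATES_GT.items():
    print(f"PCIe {gen}: {rate:5.1f} GT/sec/lane ~= {x16_bandwidth_gb(rate):6.1f} GB/sec x16")
```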
Using PCI-Express as the basis of a chiplet interconnect is, in short, precisely what UCI-Express is going to attempt to do, and not a moment too soon, with every compute and networking chip vendor looking at 2D, 2.5D, and 3D chiplet architectures as they snap their chips into pieces to make them more manufacturable at an economic cost as Moore's Law slows down while performance and throughput demands on compute and networking devices rise faster than their thermal envelopes.
…
It is no surprise to us at all that the UCI-Express protocol is coming out of Intel's Hillsboro, Oregon facility, that Das Sharma is driving it, and that he is the author of the whitepaper describing its goals. And given that Intel Foundry Services wants to open itself up to manufacturing all kinds of chippery and the packages (direct ball grid array mounted or socketed) that wrap around them, it is no surprise, either, that Intel wants an open chiplet standard. This will be a necessary condition for the mass customization and co-design that is coming for all kinds of compute and networking as Moore's Law transistor densities increase but the cost per transistor does not necessarily go down.
…
Moving to chiplets is a way to lower manufacturing costs and increase yields on the dies that go into packages, but it comes at the price of higher package manufacturing costs and more complex testing and validation. It would be interesting to see the data above broken out by monolithic and chiplet designs.
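To see why carving a big die into smaller chiplets helps yield, consider a first-order Poisson defect model. This is our own illustration, not something from the UCI-Express whitepaper, and the defect density is an assumed, illustrative figure: die yield falls exponentially with die area, so four quarter-sized dies recover a lot more good silicon than one big die.

```python
import math

def die_yield(area_cm2: float, defect_density: float) -> float:
    """Poisson yield model: probability that a die of the given area
    has zero defects, with defect_density in defects per cm^2. Real
    foundries use richer models (Murphy, negative binomial), but this
    is fine for a first-order comparison."""
    return math.exp(-area_cm2 * defect_density)

D0 = 0.1                 # illustrative defect density, defects/cm^2
monolithic_area = 8.0    # one 800 mm^2 die, expressed in cm^2
chiplet_area = 2.0       # each of four 200 mm^2 chiplets, in cm^2

# Bad chiplets are screened out before packaging, so the silicon yield
# that matters is per chiplet, not the product across the package.
print(f"Monolithic die yield: {die_yield(monolithic_area, D0):.1%}")  # ~44.9%
print(f"Per-chiplet yield:    {die_yield(chiplet_area, D0):.1%}")     # ~81.9%
```

The flip side, as noted above, is that the cost and risk move into the package: every known-good chiplet still has to be bonded, tested, and validated as an assembly.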
As Das Sharma points out in the whitepaper, UCI-Express is not just about having the ability to mix and match chiplets with different designs from different designers, which is a powerful concept indeed. (Imagine being able to make a package with a baby Xilinx programmable logic block, a set of AMD Epyc compute blocks, Intel CXL memory and I/O interconnect, and Nvidia GPU chiplets.) The other key driver of the UCI-Express standard – and why it needs to be a standard – is to create well-defined die-to-die interfaces and testing and validation procedures that ensure mixed-and-matched chiplets actually work when they are assembled into a 2D socket complex or a 2.5D interposer complex.
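As a way to think about what such a compliance regime buys, here is a toy Python sketch of the mix-and-match idea. Every name and field in it is hypothetical, invented purely for illustration; the actual UCI-Express specification defines its interfaces and compliance terms in far more detail:

```python
from dataclasses import dataclass

# Every name and field below is hypothetical, invented to illustrate the
# idea; the actual UCI-Express specification defines its own terms.
@dataclass(frozen=True)
class Chiplet:
    vendor: str
    function: str
    d2d_interface: str   # die-to-die interface the chiplet exposes
    package_type: str    # e.g. "standard-2d" or "advanced-2.5d"

def validate_package(chiplets: list[Chiplet]) -> bool:
    """A multi-vendor package only works if every chiplet speaks the same
    die-to-die interface and targets the same packaging technology. That
    is the guarantee a common standard plus compliance testing is meant
    to provide."""
    return ({c.d2d_interface for c in chiplets} == {"ucie"}
            and len({c.package_type for c in chiplets}) == 1)

package = [
    Chiplet("Xilinx", "fpga", "ucie", "advanced-2.5d"),
    Chiplet("AMD", "epyc-compute", "ucie", "advanced-2.5d"),
    Chiplet("Intel", "cxl-io", "ucie", "advanced-2.5d"),
    Chiplet("Nvidia", "gpu", "ucie", "advanced-2.5d"),
]
print(validate_package(package))  # True: the mix-and-match package composes
```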
…
It is interesting to contemplate how UCI-Express will allow components of the server motherboard of today to be brought down into the package or socket of tomorrow, as well as being used to interconnect compute and networking elements inside of that package or socket. And with the addition of a UCI-Express switch – why not? – either on the package or on the motherboard, there are all kinds of interesting, fine-grained interconnect possibilities that could span several racks and, with optical links, could span rows in a datacenter. Imagine if any element within a compute complex could talk directly over a PCI-Express fabric, in a few hops, to any other element within a pod of gear, without InfiniBand or Ethernet with RDMA in between. Just get rid of it all, and talk directly. This is why we have said that PCI-Express is the unintended but formidable datacenter interconnect, and that across a datacenter with the need for peer-to-peer links to all kinds of components, PCI-Express fabrics will be pervasive.
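To make the "few hops" idea concrete, here is a toy Python model of our own devising, assuming a two-tier leaf/spine arrangement of PCI-Express switches within a pod; no shipping fabric or product is implied:

```python
def hops(endpoint_a: int, endpoint_b: int, ports_per_leaf: int = 16) -> int:
    """Switch hops between two endpoints in a two-tier leaf/spine fabric
    of PCI-Express switches. Endpoints on the same leaf switch are one
    hop apart; any two endpoints in the pod are at most three hops apart
    (leaf -> spine -> leaf)."""
    return 1 if endpoint_a // ports_per_leaf == endpoint_b // ports_per_leaf else 3

print(hops(3, 7))    # 1 hop: same leaf switch
print(hops(3, 40))   # 3 hops: leaf -> spine -> leaf, no NIC in sight
```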
This is a big deal.