AMD acquires Pensando for $1.9 billion

AMD MAKES A BIG DPU MOVE WITH $1.9 BILLION BID FOR PENSANDO
https://www.nextplatform.com/2022/04/04/amd-makes-a-big-dpu-…

2 Likes

A nice addition to the big iron server portfolio. NVIDIA introduced their BlueField DPU a few years back, and then last year Intel upgraded from a random collection of chips to the integrated Mount Evans product.
In many modern server applications the CPU was spending up to 50% of its time processing I/O traffic, work that is now offloaded to the DPU.
Alan

1 Like

In many modern server applications the CPU was spending up to 50% of its time processing I/O traffic, work that is now offloaded to the DPU.

That’s a surprising statistic. A keyboard click has to be treated as an interrupt because the user may want a program to stop, as with Ctrl-C, but nobody can type fast enough for that to cause problems (well, maybe on Windows ;-)). When I was programming drivers, in the days of the PDP-11 and early VAX, the volume data transfers were mainly to/from disk, and those were always Direct Memory Access transfers: hand the device a physical memory address, a byte count, and a starting location on the disk for the read or write, then let the device do the work. Some CPU slowdown is to be expected from memory being busy, but 50% is a lot.
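The descriptor handoff described above (physical address, byte count, disk location, then let the device do the work) can be sketched roughly like this. The register names and command codes here are hypothetical, just to show the shape of the handoff:

```python
from dataclasses import dataclass

@dataclass
class DMADescriptor:
    physical_addr: int   # physical memory address of the buffer
    byte_count: int      # number of bytes to transfer
    disk_lba: int        # starting block on the device
    write: bool          # direction: True = memory -> disk

def start_transfer(device_regs: dict, desc: DMADescriptor) -> None:
    """Hand the descriptor to the controller; the device does the rest.

    The register names and command codes are made up for illustration.
    """
    device_regs["ADDR"] = desc.physical_addr
    device_regs["COUNT"] = desc.byte_count
    device_regs["LBA"] = desc.disk_lba
    device_regs["CMD"] = 0x2 if desc.write else 0x1  # hypothetical read/write codes
```

Once the command register is written, the CPU is free to do other work until the device raises a completion interrupt.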

Have things changed that much since, say, 1985? They should have gotten better, but sometimes I get the impression software has gone backwards. My colleague once put a polling option (spin on a bit) in our software as an alternative to an event flag. User demand, apparently! Devices are so much faster nowadays but still seem to get OSes in a twist. I recently saw a condition on a Windows Server where disk access queues were getting up to 100; from my faulty memory, Microsoft recommends this should be no higher than 5. Either you can handle the load in order or you can’t, was my thinking about Microsoft’s recommendation :wink:
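For anyone who hasn’t seen the difference, here is a minimal sketch of the two styles, an event-flag wait versus spinning on a bit, using a Python thread as a stand-in for a device:

```python
import threading
import time

# Style 1: event-flag wait. The waiting thread sleeps until signaled.
done = threading.Event()

def device_completes():
    time.sleep(0.01)   # pretend the device takes 10 ms
    done.set()         # "interrupt": signal completion

threading.Thread(target=device_completes).start()
done.wait()            # no CPU burned while waiting

# Style 2: polling ("spin on a bit"). Burns a core checking a flag.
flag = {"ready": False}

def device_completes_2():
    time.sleep(0.01)
    flag["ready"] = True

threading.Thread(target=device_completes_2).start()
while not flag["ready"]:
    pass               # CPU spins at 100% until the bit flips
```

Spinning can win on latency for very fast devices, which is presumably what the users were after, but it wastes a core that an event wait would leave free.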

So is a DPU covering up the software cracks?

Have things changed that much since say 1985?
Yes.

In 1985 if you had a 56 kbps data rate, you were probably “on the backbone” of the internet. Maybe you’d have 10 Mbps Ethernet inside a room.
Today these DPUs are handling 100,000 Mbps (100 Gbps)
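A quick back-of-the-envelope calculation shows what that means per packet, assuming full-size Ethernet frames (the 1538-byte figure includes the 1500-byte payload plus headers, preamble, and inter-frame gap):

```python
link_bps = 100_000_000_000   # 100 Gbps line rate
wire_bytes = 1538            # 1500-byte payload + Ethernet framing overhead
pps = link_bps / (wire_bytes * 8)
print(f"{pps:,.0f} full-size frames per second")  # roughly 8.1 million
```

With minimum-size frames the packet rate is far higher still, which is why per-packet processing cost matters so much at these speeds.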

So is a DPU covering up the software cracks?
I’m sure there are some cases of the extra processing power and extra network bandwidth being used in ways that make it so inefficiencies aren’t noticed.
For example, what might have been done by a single higher-power many-CPU server before may be spread across 40 machines in a rack now, with the DPUs handling the coordination between the 40 machines. And if one of those 40 has a malfunction, it gets deactivated and sits there waiting to be replaced at some time in the future. So now you’ve got 39/40ths of the capacity you had before. Is that a “crack” or an inefficiency being covered by the DPUs and extra network/compute? Sure, you can look at it that way. Or you can view it as a different paradigm for solving the problem - a paradigm that winds up being cheaper for the company to use, and based on that metric it’s better.

1 Like

I grabbed the number from one of the DPU articles I was reading.

Think about the servers at Netflix or Facebook.
I am not sure about the servers that do Google searches, but they are perhaps similar.

There is also the task of encrypting the traffic, as well as security and virus checking.

Alan

1 Like

56k was mighty fine

There are so many old-timers here (and some even older) that I’m sure many people remember configuring email systems with dialup services.

Amazing that such basic text-based systems still look the same under the hood. HELO server!
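For the curious, the opening of an SMTP session really is still the same handful of text commands it was decades ago. A toy sketch of the client’s side of the dialog (the host and address names are made up; real delivery would send these lines over a TCP connection to the server):

```python
def smtp_dialog(helo_name: str, sender: str, rcpt: str) -> list:
    """Build the classic client-side SMTP command sequence.

    This is only the command skeleton; a real client reads and checks
    the server's numeric reply after each line.
    """
    return [
        f"HELO {helo_name}",
        f"MAIL FROM:<{sender}>",
        f"RCPT TO:<{rcpt}>",
        "DATA",
        ".",          # a lone dot on its own line terminates the message body
        "QUIT",
    ]

for line in smtp_dialog("client.example.com", "alice@example.com", "bob@example.org"):
    print(line)
```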

2 Likes

56k was mighty fine

Sure was. I started out with 150 baud dialup lines (from Western Massachusetts to servers in Boston). Then I was hired by Honeywell Small Systems Division. (The name got changed several times while I was there, but when I was hired it was SSD.) They had a Multics system (BCO) and direct connect lines within the building, most at 300 baud. I rewrote one of the terminal drivers (in Multics Lisp) and, sort of as a reward, I got a 1200 baud line. :wink:

Multics was very nice, even at 300 baud. If you made a mistake, say your program got into an infinite loop, you could send a break and then look at the code at that point. Say you found an “=” that should have been a “>=” because of floating point. You correct the typo, recompile the program, and restart from the point where you stopped it. This was really handy when running a job with several hundred test programs.

Sure was. I started out with 150 baud dialup lines (from Western Massachusetts to servers in Boston). Then I was hired by Honeywell Small Systems Division. (The name got changed several times while I was there, but when I was hired it was SSD.) They had a Multics system (BCO) and direct connect lines within the building, most at 300 baud. I rewrote one of the terminal drivers (in Multics Lisp) and, sort of as a reward, I got a 1200 baud line. :wink:

I remember when a 300 baud modem was the hot stuff as an undergrad… Once in a while a glitch would stick you with 110 baud but usually 300 was available, for not one but two users on campus at the same time!

A few years later 1200 baud dialup came and they could support up to two dozen users. It was basically science fiction – you could even run vi and it was almost as usable as the 9600 baud you’d get in the computer center, especially if you suppressed some screen redraw operations :slight_smile:

It beat walking an hour in the snow to spend an hour at a terminal in person :slight_smile:

1 Like

Today these DPUs are handling 100,000 Mbps (100Gbps)

Handling the speed alone isn’t that big of a deal, because the data does get DMA’d to memory. But then you have to stitch the TCP segments together, send ACKs, resend dropped packets, etc. And all of that is done in the kernel, with the inherent cost of context switching. There is a reason companies with high-bandwidth applications are trying to move away from TCP.
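A toy illustration of the “stitching” part: reassembling out-of-order TCP segments keyed by sequence number. A real stack also tracks windows, ACKs, and retransmit timers, so this is only the skeleton of the idea:

```python
def reassemble(segments: dict, start_seq: int) -> bytes:
    """Stitch received segments (keyed by sequence number) into a byte stream.

    Walks forward from start_seq, appending each in-order segment.
    Stops at the first gap, which is where a real stack would hold the
    data and wait for a retransmission of the missing segment.
    """
    data, seq = b"", start_seq
    while seq in segments:
        chunk = segments[seq]
        data += chunk
        seq += len(chunk)   # next expected sequence number
    return data
```

Doing this per packet at 100 Gbps, in the kernel, for thousands of flows is where the CPU time goes.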

But Pensando isn’t really targeted at those - at least in the use case I’m aware of. Pensando has a powerful routing and protocol engine on the chip. The idea is to replace whatever you’ve been using to perform your SmartNIC functionality with a Pensando NIC. So if AWS runs your load in a VM and has to dedicate say 10 cores (out of 100) to (en/de)capsulating and routing packets, you can instead use Pensando with all its HW acceleration, use 1 core to manage the data in the Pensando NIC and reclaim 9 cores for your load. Given the cost of these CPUs and the value of having bigger instances, this can be a huge win.
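The (en/de)capsulation work being offloaded looks roughly like this. The sketch below follows the VXLAN-style layout (an 8-byte outer header carrying a 24-bit virtual network ID), but it is simplified and omits the outer UDP/IP headers a real implementation would add:

```python
import struct

VNI_FLAG = 0x08   # "VNI present" flag bit in the VXLAN header

def encapsulate(vni: int, inner_packet: bytes) -> bytes:
    """Wrap a tenant packet in a VXLAN-style outer header.

    Header layout: 1 flags byte, 3 reserved bytes, 3-byte VNI, 1 reserved byte.
    """
    header = struct.pack("!B3x", VNI_FLAG) + struct.pack("!I", vni << 8)
    return header + inner_packet

def decapsulate(frame: bytes) -> tuple:
    """Strip the outer header and recover (vni, inner_packet)."""
    vni = struct.unpack("!I", frame[4:8])[0] >> 8
    return vni, frame[8:]
```

Trivial per packet, but multiplied by millions of packets per second it is exactly the kind of repetitive work worth moving off the host cores.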

m

4 Likes

The idea is to replace whatever you’ve been using to perform your SmartNIC functionality with a Pensando NIC. So if AWS runs your load in a VM and has to dedicate say 10 cores (out of 100) to (en/de)capsulating and routing packets, you can instead use Pensando with all its HW acceleration, use 1 core to manage the data in the Pensando NIC and reclaim 9 cores for your load. Given the cost of these CPUs and the value of having bigger instances, this can be a huge win.

I don’t know whether Amazon/AWS’s “Nitro” hardware can be aligned with this properly; it may conflict somehow. I don’t have time to dig into it right now, but I know they have some kind of smart NIC offload hardware. Other providers, server OEMs, etc. seem more likely to benefit.

https://www.nextplatform.com/2020/02/03/vertical-integration…

Ah, here’s a good article on what the major cloud providers were doing with proprietary or third-party smartNIC-like devices two years ago. ASICs, FPGAs, a bunch of cleverness. And building the software stack to tightly integrate with it.

1 Like

Taxonomies are good.

https://www.servethehome.com/dpu-vs-smartnic-sth-nic-continu…

Recently, there has been a lot of confusion in the industry around what is a DPU, or data processing unit, versus a SmartNIC. One of the key challenges here is that marketing organizations are chasing buzzwords, and in some cases avoiding buzzwords, which makes comparisons difficult. We are introducing the STH NIC Continuum in its first draft Q2 2021 edition to show how we are going to be classifying NICs at STH. We do a large number of NIC, server, and switch reviews in the industry, so we simply need a framework to discuss types of NICs, and that is what we have today.


Getting to the SmartNIC vs. DPU discussion, the key innovation with SmartNICs over offload NICs is adding a more flexible programmable pipeline, which is something that DPUs incorporate as well. The “SmartNIC” term was in use well before the industry adopted the “DPU” term, which adds to the confusion. When we looked over the traditional SmartNIC and DPU materials, a clear change in the conceptual model emerged. We thus define SmartNICs as NICs that have programmable pipelines to further enhance the offload capabilities from the host CPU.

[Figure: SmartNIC Example, Q2 2021]
In other words, although many may run Linux and have their own CPU cores, the function of a SmartNIC is to alleviate the burden from the host CPU as part of the overall server. In that role, SmartNICs differ from DPUs as DPUs seem to be more focused on being independent infrastructure endpoints.

When we surveyed what is being called a “DPU” today, offload and programmability are certainly key capabilities. The big difference was that vendors are designing the DPU, in the spirit of the AWS Nitro platform, to be an infrastructure endpoint. Those infrastructure endpoints may attach storage to the network directly (e.g. the Fungible products), they may be a secure onramp to the network (e.g. the Pensando DSC or Marvell Octeon products), or they may be more general-purpose endpoints that deliver compute, network, and storage securely to and from the overall infrastructure.

This may seem like a nuanced distinction, but when we looked at what is in the market, there is a clear split between products designed to be higher-end offload (SmartNICs) and independent network endpoints delivering services (DPUs). Some of the confusion comes from the highest-end products marketing themselves as SmartNICs or DPUs, but we think they should be their own category that we are calling “Exotic.”

The category we are currently calling Exotic NICs covers solutions that generally have enormous flexibility. Often, that flexibility is enabled by utilizing large FPGAs. With FPGAs, organizations can create their own custom pipelines for low-latency networking, and even run applications such as AI inferencing on the NIC without needing to utilize the host CPU.

Generally, though, there is a major difference between the SmartNIC/DPU and the Exotic NIC. That flexibility and programmability mean that organizations deploying Exotic NICs will have teams dedicated to extracting value from the NIC by programming new logic for the FPGA. With flexibility comes responsibility, and that is why these solutions need to be categorized outside of the traditional SmartNIC and DPU categories. In many domains, solutions categorized as Exotic can yield impressive results, but they also carry additional design and maintenance burdens that limit them to high-end applications.

Greetings to “Eachus” and to one and all,

I have followed this Board for a while, and have particularly
been impressed by the posts of “Eachus.” I suspect that “Eachus”
will know the answer to my question, one that is bedeviling me
at the moment:

How or where does one gain access to all of the non-financial
discussion boards that are supposedly now accessible in “read
only” form?

Management at Motley Fool have notified all of us as follows:

“Non-financial boards have been closed but will continue to be accessible in read-only form. If you’re disappointed, we understand. Thank you for being an active participant in this community.”

I understand the above statement to mean that we can go to any of the closed
boards and at least read the previously written, existing posts. Yes?
If yes, how to find them (I’ve just spent a fair amount of time looking).

Thanks!

Tomjet

Not Eachus, but here is one way to do it.
I noticed they are no longer available when you click on “board home” above. However, if you click on “customize” you can add any of the closed boards to your favorites, then go to “favorites and replies” and you can click on the board from there.
Hope this helps,
Alan

1 Like

And the categories are still there - for example
https://discussion.fool.com/food-drink-10094.aspx
https://discussion.fool.com/hobbies-interests-10097.aspx

Or if you know the title, you can still search for boards - ex:
https://discussion.fool.com/FindResults.aspx?name=woodworking
(Search box is at the bottom center of the “Boards Home” page

I understand the above statement to mean that we can go to any of the closed boards and at least read the previously written, existing posts. Yes? If yes, how to find them (I’ve just spent a fair amount of time looking).

Look at your Favorites & Replies list. It should be at the top of the screen when you read messages here (middle of the third line). I’ve gone to closed boards, which are still listed there. I’ve been a subscriber to the Motley Fool boards, and the AMD board in particular, from the beginning, so I can tell you that not only are closed boards still listed there, but unsubscribing from them after a while also works fine. Recommending posts at a closed board doesn’t work, though.