Well, for me complex is bad and understanding is paramount. Here’s what I’ve been able to gather:
- Astera Labs mostly (almost entirely?) sells to the big cloud providers (aka “hyperscalers”) building out infrastructure, both AI and general purpose.
- When Astera talks about “third party accelerators” they’re talking about Nvidia, AMD, Intel, etc.
- When Astera talks about “internally developed accelerators” they’re talking about companies developing their own AI chips, like Amazon, Google, and Tesla. I don’t know that any of them are customers, though.
- Astera Labs’ products are sold for both AI-centric and traditional general-purpose computing systems, but AI appears to be the growth market.
- They have networking/connectivity products in 3 technology areas:
• PCI Express
• Ethernet
• Compute Express Link (CXL)
From what I can tell, CXL is not currently a revenue stream, as CXL 2.0 CPUs haven’t launched yet. Maybe a 2025 thing.
Ethernet has its own set of competitive issues, I think, from Nvidia’s own Spectrum-X to companies like Broadcom and others behind the upcoming Ultra Ethernet standard. That isn’t to say Astera Labs isn’t competitive in this area; it seems to be.
Which leaves us with PCI Express products. The new PCIe standard, known as Gen6, is coming, and Astera Labs appears to be at the forefront there. However, the concern right now is whether Nvidia’s Blackwell processors will be good or bad for Astera Labs. I read the Morgan Stanley 07Aug note on ALAB, and found this:
When NVIDIA ramps rack-scale products next year, retimer content declines, as many of the connections currently using PCI express will move to NVIDIA’s proprietary NVLink standard. For an 8 GPU server today, there are 8-14 PCIe retimers, but for NVL72 or NVL36 that number should go down to 1 or 2. This was a known issue at the time of the IPO, but expectations for the rack-scale products have continued to grow.
Note that “NVL36” and “NVL72” are new rack-scale products from Nvidia. The NVL72 combines 36 Grace CPUs and 72 Blackwell GPUs in a single liquid-cooled rack. It uses Nvidia’s proprietary NVLink connectivity internally, which enables it to act as a single massive GPU and deliver 30X faster real-time trillion-parameter LLM inference. The NVL36 is half that size.
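To put those figures in per-GPU terms, here’s a quick back-of-the-envelope sketch of my own (it assumes the “1 or 2” retimers refers to the whole NVL72/NVL36 rack, which the note doesn’t spell out):

```python
# Rough per-GPU retimer content, using the figures quoted above.
# Assumption (mine): the "1 or 2" retimer figure applies to the whole rack.

hgx_retimers = (8, 14)    # retimers in a typical 8-GPU server today
hgx_gpus = 8

nvl72_retimers = (1, 2)   # quoted range for the rack-scale product
nvl72_gpus = 72

hgx_per_gpu = [r / hgx_gpus for r in hgx_retimers]
nvl72_per_gpu = [r / nvl72_gpus for r in nvl72_retimers]

print(f"8-GPU server: {hgx_per_gpu[0]:.2f}-{hgx_per_gpu[1]:.2f} retimers per GPU")
print(f"NVL72:        {nvl72_per_gpu[0]:.3f}-{nvl72_per_gpu[1]:.3f} retimers per GPU")
# 8-GPU server: 1.00-1.75 retimers per GPU
# NVL72:        0.014-0.028 retimers per GPU
```

If that reading is right, retimer content per GPU drops by roughly two orders of magnitude inside a rack-scale system, which is why the Blackwell ramp is the concern.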
From Morgan Stanley:
Astera can’t really address this situation directly as it involves non-public customer information, but they reiterated their confidence that they will gain content on a per GPU basis for the Blackwell platform. That makes sense, as away from NVL72 we see an increasing number of PCIe lanes and thus retimers per server.
We certainly believe the company will gain content per Blackwell, but in order to outgrow NVIDIA, it will need to be substantial as we see Nvidia increasing price per card by nearly 40% already.
Morgan Stanley’s summary is:
Astera offers the best growth rates in our coverage, with multiple product cycles and a very strong position in key AI technology. There is also a scarcity of small cap assets with these exposures. The stock has been very weak as of late, with concerns over potential content in Nvidia rack-scale solutions. We trust management and acknowledge they aren’t able to talk specifics, but the issue has weighed on the stock.
We are lowering our multiple to 17x sales (previously 33x), which reflects compression in the AI space and some remaining risk on future Nvidia content. Our new price target is $55 as sales come up from $445mn to $520mn. This is a premium to large caps (NVDA 14x and AVGO 12x), but we note that there is a dearth of small cap assets with this exposure given that over 80% of ALAB’s revenue is directly driven by AI cloud spending, and they have the highest growth in our coverage. We stay EW on some remaining content growth uncertainties, but note that we view this name more favorably after the pullback.
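For what it’s worth, the target-price arithmetic roughly hangs together. A quick sketch of my own (the share count is just what the numbers imply, not something Morgan Stanley states):

```python
# Back-of-the-envelope check on the Morgan Stanley numbers (my arithmetic).
sales_estimate = 520e6    # revised sales estimate, $520mn
ps_multiple = 17          # lowered price-to-sales multiple
price_target = 55         # stated price target per share

implied_market_cap = ps_multiple * sales_estimate
implied_shares = implied_market_cap / price_target

print(f"Implied market cap:  ${implied_market_cap / 1e9:.2f}bn")  # ~$8.84bn
print(f"Implied share count: {implied_shares / 1e6:.0f}mn")       # ~161mn
```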
Here’s a video of Astera Labs’ PCI Express product(s):
They’re demonstrating a 7 meter copper cable, whereas the competition, they claim, is limited to 3 meters. Their rationale is that as companies build out AI capacity, power constraints limit what they can do within a single rack, and Astera’s products enable high-speed coupling of multiple racks.
I don’t personally understand this portion of the AI market well enough to know which company’s tech is best, nor how well Astera Labs is tied into the hyperscalers, nor what Nvidia is doing with Blackwell to create or remove opportunities for third parties in the hyperscaler AI build-out.