Introducing Astera Labs (ALAB)

Astera Labs (ALAB) went public in March 2024. They are a fabless semiconductor and hardware company that competes with Broadcom and Marvell. Their main competitive advantage is that their components and silicon are designed specifically for AI and cloud acceleration. Their customers include Nvidia, Intel, AMD, and the hyper-scalers. Before the AI boom, Amazon was their largest customer. They are a California-based company, though much of their revenue is technically recognized in Taiwan, where their parts are integrated by TSMC.

They have three main products,

  1. Aries - improves signal integrity, addresses data bottlenecks
  2. Taurus - improves networking issues
  3. Leo - increases memory bandwidth while reducing latency

Their software platform is called COSMOS and works across all three products. This software allows the integrators and hyper-scalers to monitor performance of the Astera components and measure critical metrics like signal strength and data throughput. The customers can design and simulate different configurations before settling on a final design.

The company has seen revenue growth accelerate over the past year after securing a number of design wins. Quarterly revenue has gone from,

36.9M → 50.6M → 65.3M → 76.8M → projected 95-100M
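For reference, those sequential steps work out as follows (a quick sketch using the figures above; the last step uses the guidance range):

```python
# Quarterly revenue in $M, from the progression above
revenue = [36.9, 50.6, 65.3, 76.8]

# Sequential (QoQ) growth for each step
growth = [round((b / a - 1) * 100, 1) for a, b in zip(revenue, revenue[1:])]
print(growth)  # [37.1, 29.1, 17.6]

# The 95-100M guide implies roughly 24-30% sequential growth
guide = [round((g / 76.8 - 1) * 100, 1) for g in (95, 100)]
print(guide)   # [23.7, 30.2]
```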

The company is profitable on an adjusted EPS basis and still slightly negative on a GAAP EPS basis. Reviewing their last quarter (Q2), guidance versus actuals looked like the following,

Sequential Q2 revenue growth of 10-12% → actual 17.6% (76.8M)
Adj gross margin 77% → actual 78%
Adj operating expenses 40M → actual 41.2M
Adj EPS 0.11 → 0.13
New revenue guide of 95-100M (24-30% qoq)
New adj EPS guide of 0.16-0.17

Other highlights from the last quarter,

  • working closely with hyper-scaler customers to solve challenges across diverse AI platform architectures
  • favorable secular trends
  • record revenue of 76.9M is up 619% yoy and 18% qoq
  • adj operating margin 24.4%, adj diluted EPS 0.13
  • expanded cloud scale Interop Lab to Taiwan
  • new R&D center in India
  • focused on three core technology standards: PCI Express, Ethernet, Compute Express Link - all generating revenue
  • differentiated architecture with COSMOS software suite
  • PCI Gen 5 connectivity in AI systems is delivering signal integrity and link stability
  • hyper-scaler customers are ramping new AI server programs
  • evolution is in commercialization of PCI Gen 6
  • offering seamless upgrade path from Gen 5 → Gen 6
  • have started shipping initial quantities of PCIe Gen 6 solution: Aries 6
  • 400-gig Taurus Ethernet Smart Cable Modules (SCMs) have shifted to volume production
  • excited about the breadth and diversity of Taurus design wins
  • Leo CXL (Compute Express Link) shipped material volume for “preproduction cloud-scale deployment in data centers”
  • design wins across diverse AI platforms at hyperscalers
  • increasing average dollar content in next generation GPU based AI platforms (design wins in Blackwell will bring more revenue than Hopper)
  • “we see increasing content on next generation AI platforms”
  • design wins as hyper scalers compose solutions based on Blackwell GPUs
  • flexible silicon architecture
  • COSMOS software suite can be harnessed to customize the connectivity backbone
  • industry transition to PCIe Gen 6 will be a catalyst for increasing PCIe retimer content
  • can quickly deploy PCIe Gen 6 technology at scale
  • improving rack airflow while actively monitoring and optimizing link health
  • multi-rack GPU clustering application as new and growing market opportunity for Aries
  • excited about Taurus becoming another engine of growth
  • deploying Leo CXL controllers in preproduction racks in data centers
  • “very excited about the potential of CXL in data center applications”
  • beginning to ship volume into 400-gig Ethernet based systems in the third quarter
  • R&D expenses 27.1M, S&M 6.3M, G&A 7.8M
  • Interest income 10.3M, cash flow from ops 29.8M, cash 830M
  • Q3 revenue guided to be 95-100m or 24-30% sequential growth
  • Taurus family to drive solid qoq growth with new 400-gig Ethernet systems ramp into volume production
  • remain aggressive in expanding R&D resource pool across head count
  • believe in early innings of AI ramp, Llama model requires 10x more compute
  • seeing penetration of retimer technology into general purpose servers
  • larger names jumping into the mix validates the retimer market
  • COSMOS software gives significant advantage over competitors
  • shipping preproduction volume for supporting the initial ramps of Blackwell GB200 based platforms
  • seeing a lot of growth from Aries Gen 5 going into AI servers
  • the only way to keep GPUs utilized is to keep the AI servers fed with more and more data throughput
  • as protocols go faster and faster, see more demand for products
  • improving GPU utilization through increased data rates
  • Aries allows you to connect racks together
  • tools being made available to hyperscalers partners
  • Blackwell creating more challenges for hyper-scalers with power delivery, retimer helps to solve
  • already have design wins across multiple form factors of hyper-scaler GPUs
  • analyst “let me also add my congratulations on a very strong quarter and outlook”
  • Q2 revenues were driven heavily by broadening design wins
  • committed to whatever platform customers want to deploy
  • already multiple design wins in Blackwell family
  • customization is very broad based
  • Blackwell, we expect our PCIe content per GPU to go up
  • design wins lead to 6-12 months before production
  • platform engineers already familiar with Gen 5 retimers find it easy to switch to Gen 6
  • “we are essentially the leader in the space, being the one that is getting the first crack at these opportunities, and we are doing everything we can to convert those things into design wins and revenue”
  • we see a lot of opportunity for our existing products

I’ve also reviewed the Q1 earnings, a presentation at JP Morgan’s technology conference, and the company’s S-1, which I’ll cover in a follow-up post.

On valuation the company could still be considered pricey, but it went public in March 2024 when AI hardware valuations were high. The stock went as high as $85 at one point and now trades at $41. From what I can tell, they’ve delivered well on their first two earnings reports, and I believe Astera is a compelling investment opportunity right now.

59 Likes

That’s a remarkable bull case for this company. Is there a downside? Competition? Technology protection? What could go wrong with this golden goose?

12 Likes

@wpr101 thanks for this.

I’m just learning about $ALAB thanks to your post.
I haven’t made my own conclusions yet, but came across something interesting: a possible short-term headwind.

" ALAB has ZERO CONTENT in GB200 NVL systems. "

This is seemingly confirmed by $NVDA?

" The GB200 Grace Blackwell Superchip is a key component of the NVIDIA GB200 NVL72, connecting two high-performance NVIDIA Blackwell Tensor Core GPUs and an NVIDIA Grace CPU using the NVIDIA® NVLink®-C2C interconnect to the two Blackwell GPUs. "


Going forward:

  • " A short, simple channel that only passes through a single PCB (no backplane, no extra connectors/cables) does not need re-timing. "

    Will architectures migrate towards solutions that do not require $ALAB products?

    Will increasing demands of AI drive ever-more-complex architectures that require $ALAB products?

    Both maybe?

    Thanks again for introducing $ALAB; hopefully I can spend some more time on it. Wanted to get you some feedback asap.

    13 Likes

    Adding some more details of the other public documents on Astera Labs that are available,

    Q1’24 earnings call (May 7, 2024)

    • revenue 65.3M, +29% qoq and +269% yoy
    • adj operating margin 24.3%, adj EPS 0.10
    • we expect to see our average dollar content per AI platform increase and even more so with the new products we have in development
    • provide customers with a complete customizable solution
    • COSMOS software runs on entire product portfolio and integrated with customers’ operating stack to deliver seamless customization
    • see AI workloads and newer GPUs driving the transition from PCIe Gen 5 at 32 gbps to PCIe Gen 6 at 64 gbps
    • driven by AI workloads ethernet rates are doubling with insatiable need for speed
    • expect Taurus Smart Cable Modules to begin to ramp in the back half of 2024
    • COSMOS software has led to significant learnings and ability to customize the Leo memory expansion solution
    • Aries 6 enables a seamless upgrade path from current PCIe Gen 5 based platforms
    • new Aries product category expands our market opportunity from within the rack to across racks
    • Aries Smart Cable Modules support our COSMOS software suite to deliver powerful link monitoring, fleet management, and observability - customizable for the diverse needs of hyper-scaler customers
    • during the quarter we shipped products to all major hyper-scalers and AI accelerator manufacturers
    • cash flow from operating activities for Q1 was 3.7M
    • analyst says on “XPU” shipments this year, “only half is going to be NVIDIA based, so it is starting to broaden out”
    • a lot of customers have not fully deployed their AI system, seeing incremental growth just from adding the different platforms that we have design wins in
    • Compute Express Link (CXL), “every hyper-scaler is in some shape or form evaluating and working with the technology”
    • excited about PCIe AECs now opening new fronts in terms of Clustering GPUs, meaning interconnecting accelerators
    • every few months new AI platforms open which give opportunities
    • one of these opportunities is clustering GPUs, one example being Nvidia’s NVLink the other PCI Express and Ethernet
    • PCI Express Retimers which are offered in the form of an active electrical cable, “very intense, very dense mesh connections, so they can grow very, very rapidly”
    • as the AI architectures evolve so do the connectivity challenges, some will be incremental and some completely new
    • still the only company sampling a Gen 6 PCI solution, no competition there yet
    • working on certain products which are developed ground up for AI infra
    • trying to invest quite a bit in headcount, especially R&D
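On the Gen 5 → Gen 6 bullets above: the raw transfer rate doubles from 32 GT/s to 64 GT/s per lane. A back-of-envelope for a 16-lane link (the encoding-efficiency figures are approximations; real throughput also loses some to packet overhead):

```python
def x16_bandwidth_gbps(rate_gt_s, encoding_efficiency):
    """Approximate per-direction raw bandwidth of a 16-lane PCIe link."""
    return rate_gt_s * 16 * encoding_efficiency

gen5 = x16_bandwidth_gbps(32, 128 / 130)  # 128b/130b encoding -> ~504 Gb/s (~63 GB/s)
gen6 = x16_bandwidth_gbps(64, 0.98)       # PAM4 + FLIT encoding (rough figure) -> ~1 Tb/s
```

The doubling is what drives the retimer thesis: at 64 GT/s, signal integrity over the same trace or cable length gets harder, so more links need a retimer in the middle.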

    Astera presented at the JPMorgan Annual Global Technology conference, some highlights include,

    • analyst says about Astera, “their silicon and software solutions are integrated and powering 90% plus of the world’s AI compute servers and clusters”
    • GPUs are underutilized 50% of the time, and Astera solutions speed up data, networking, and memory
    • built a very trusted relationship with partners and customers
    • customers will often come and say, “Hey, go build me this solution because I know you guys can do chips, you can do hardware, you can do software”
    • Aries PCI Express smart DSP Retimer has 90%+ of market share, gold standard in the industry for the data connectivity bottleneck
    • customers also run COSMOS APIs on their platforms to pull out telemetry information so they can fine tune and make sure running at peak performance
    • “They are telling us what new products to build, and we are building them at a feverish pace. So, lots more to come”
    • the ASIC manufacturers are bringing their custom chips to market at very rapid pace: “Google TPU next gen, Amazon Trainium, Inferentia, Microsoft Maya, Meta, AMD MI-300X”
    • we see a lot of pull from the industry
    • once you increase the complexity of the models, you need more AI platforms, and hyper-scalers are on a mad rush to really deploy them in their own unique ways
    • we have a ringside seat to see these deployments happen
    • leading four hyper-scalers have said CapEx is increasing 45% yoy, good tailwinds for the type of business we are in
    • enable hyper-scalers’ AI platforms to meet their own unique requirements; COSMOS gives deep diagnostic information, providing an incredible amount of visibility into telemetry, which is a huge advantage
    • Cloud-Scale Interop Lab hosts customer systems and runs all the tests they would typically do in a data center, but in a lab setting
    • leadership position in PCI Express, what has happened now is our customers and ecosystem partners will send their unreleased products to our lab
    • analyst says number one feedback from customers was impressive time to market and performance, close number two was the COSMOS platform and Interop Lab
    • integrating COSMOS software stack into their data center management system - very sticky part of solution
    • CPU talking to the SSD was a huge problem for signal integrity; a retimer sits in the middle each time
    • very unique position to see what exactly is impacting the data
    • back in 2019, learned a lot by being first
    • implemented solutions into COSMOS software and why so successful with Gen 5, now same thing is happening with Gen 6
    • expect Gen 6 deployments to happen in 2025 and cross over may happen in 2026
    • “PCI Express is going to Gen 6. Nvidia has made that announcement, others will follow”
    • making sure customers do not need to search for a backup or second solution
    • supported customers incredibly well from technical support perspective
    • very diversified manufacturing supply chain that is designed to support crazy ramps that hyper-scaler customers have
    • huge first mover advantage
    • once a hyper-scaler has figured out their connectivity infra, they do not go back to tweak it to save a few dollars, what is important is robustness
    • we have to stay paranoid (on competition)
    • Aries PCIe AEC Smart Cable solution has already secured design wins that are going to start shipping volumes to customers in the back half of the year
    • very happy with the trajectory in Aries Smart Cable Modules
    • “there is actually an incredible interest from the customers for all of our products right now”
    • pace of deployment is increasing
    • working on new products that are going to be “very, very important to our customers” … “the customers that really write the big checks. So I think we have our wagon hitched to the right customers”
    • “We are in the fastest growing market”
    • COSMOS software gets integrated into customers’ stack
    • Leo CXL opportunity over next few years is quite large
    • ROI for CXL is very clear
    • “design pipeline for all our products remains exceedingly strong”
    • 800 products
    • stand alone integrated circuits have very good gross margins
    • comfortable to maintain at least 70% margins
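On the bullet above about customers running COSMOS APIs to pull telemetry: none of COSMOS’s APIs are public, so purely as a hypothetical mental model of what “pulling telemetry for fleet health” might look like (all names, fields, and thresholds here are invented for illustration):

```python
from dataclasses import dataclass

# Hypothetical telemetry record -- the real COSMOS API is proprietary.
@dataclass
class LinkTelemetry:
    link_id: str
    eye_margin_db: float      # signal-integrity headroom on the link
    correctable_errors: int   # errors fixed by the PHY since the last poll

def flag_degraded(fleet, min_margin_db=3.0, max_errors=100):
    """Return links a fleet-health dashboard might flag for attention."""
    return [t.link_id for t in fleet
            if t.eye_margin_db < min_margin_db
            or t.correctable_errors > max_errors]

fleet = [LinkTelemetry("gpu0-retimer0", 5.2, 3),
         LinkTelemetry("gpu1-retimer1", 2.1, 250)]
print(flag_degraded(fleet))  # ['gpu1-retimer1']
```

The point of the sketch: because the retimer sits in the data path, it can report link health the way an APM agent reports service health, which is the “eyes and ears” claim the company keeps repeating.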

    From the S1 and other sources,

    • shipping to all major hyper-scalers and AI platform suppliers
    • system OEMs play important role in supply chain and are important customers
    • new AI systems require a connectivity backbone
    • platform supports flexibility, customization, observability, & predictive analytics
    • TAM 17.2B, 27.4B by 2027
    • bottlenecks addressed by products: 1) data & network bandwidth 2) signal integrity challenges 3) memory wall 4) interconnect reliability & observability 5) cloud/AI complexity
    • platform specific standard product (PSSP) - addresses unique need of each hyper scaler
    • real time link monitoring for fleet health management
    • platform optionality and supply chain flexibility, allows for different suppliers (so not completely reliant on TSMC)
    • form factors of ICs, boards, modules which accelerate time to market - improves ROI for hyper-scalers
    • software defined architecture allows flexibility to make modifications, customizations, and upgrades after deployment
    • focus on interoperability allows customers to deploy with confidence, considered trusted partner
    • COSMOS enables deep visibility into performance limitations of cloud and AI infra, provides pathway to continuously improve products
    • trusted partner for generational upgrades
    • virtuous cycle where gaining more insight from existing deployments
    • most direct revenue comes from distributors, who operate as logistics and order-fulfillment service providers to their customers
    • strong relationship with TSMC, qualify multiple suppliers and vendors to weather supply chain disruptions
    • Delaware-incorporated, founded October 2017 by three cofounders who worked at a previous company together
    • in 2023 no customer greater than 1/3rd revenue, top three were 70%
    • Fidelity owns 15% of shares, Sutter Hill VC 8.6%, Intel 3.8%, other investors include Vanguard, Amazon, T Rowe Price, and Blackrock
    • GPUs/AI accelerators make up 70%+ of AI server cost
    • integrated into reference designs for NVIDIA, AMD, and Intel
    • COSMOS allows for predictive maintenance
    • FAE (Field Application Engineering) teams for onsite technical help
    • R&D offices in US, Canada, Israel
    • 267 employees: US 205, Canada 31, China 10, Israel 7, Taiwan 14
    • revenue by location is determined by billing address of customer - most revenue recognized in Taiwan
    • competitors are different for each product,
      Aries - Montage Technology, Parade Technologies
      Taurus - Broadcom, Credo (CRDO), Marvell
      Leo - Marvell, Microchip Technology, Montage Technology, Rambus Inc
    12 Likes

    The biggest risk I see is the dependence on TSMC, but that’s not unique to Astera, and they claim to be ready to ramp quickly at multiple other vendors in case of contingencies.

    Astera’s software-driven platform is a differentiator, with the COSMOS platform allowing for advanced diagnostics that others do not have. The Interop Lab lets customers simulate real-world conditions for the product.

    Competition seems mainly to be legacy players like Broadcom and Marvell, with the difference being that Astera Labs’ products were built with cloud and AI in mind, while the legacy competitors are adapting existing products to meet newer demand.

    The IPO seems to have had a lot of hype around it, especially with it being, at the time, a company with ~50M in quarterly revenue and a ~15B market cap. Looking at the investors on board, it’s a who’s who of both Silicon Valley and Wall Street, so this doesn’t appear to be a hidden company. The P/S ratio has come down from about 100 at the peak to around 30 now, as the company approaches 100M in quarterly revenue next quarter at a ~6B valuation.
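For what it’s worth, here is roughly how a trailing-sales multiple falls out of the quarterly figures quoted earlier in the thread (the market cap is a round number; the exact multiple depends on share count and which quarters you annualize):

```python
# Trailing four reported quarters, $M (from the revenue progression above)
ttm_revenue = 36.9 + 50.6 + 65.3 + 76.8   # = 229.6
market_cap = 6_000                         # ~$6B, the round figure cited
ps = market_cap / ttm_revenue
print(round(ps, 1))  # 26.1 -- same ballpark as the ~30 quoted
```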


    @intjudo

    " ALAB has ZERO CONTENT in GB200 NVL systems. "

    This is seemingly confirmed by $NVDA?

    I believe some of the claims in the article are incorrect, such as that PCIe retimers are 95%+ of revenue. The company says they have 800 products across three separate product lines. Additionally, I don’t think Amazon is their second-biggest customer anymore, although they were the first large customer that Astera had.

    NVLink and NVSwitch are a solution competing with PCI Express and Ethernet, which are the other two main ways to get GPUs to work together. What’s hard to find information on is whether NVLink and NVSwitch rely on components that Astera Labs makes. Astera has said they are in the reference designs from Nvidia and that Blackwell will generate more revenue than Hopper did. It’s possible that Astera could be losing ground in the NVLink area but making gains with Nvidia elsewhere. I’m not really convinced by that article highlighting some photos of early-model Blackwell chips that Astera is not working with them. Since the company is guiding for 24-30% sequential revenue growth, it does not seem like revenue is about to fall off as the article claims.

    One interesting aspect from the JP Morgan tech conference was that they said hyper-scalers are putting their own accelerators alongside Astera’s components in some configurations. It’s not necessarily a winner-take-all scenario on the parts; the hyper-scaler will use some of their own components and some of Astera’s.

    11 Likes

    Thanks for bringing this exciting company to the board, @wpr101! Revenue growth is blistering so far in 2024 and appears it will 3x to $350m+ for the year…pretty impressive acceleration from 2022 when revenue was $79.9m and 2023 when revenue was $115.8m. To think they will do $100m+ next QUARTER alone is amazing.
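A quick sanity check on that ~3x figure, using the two reported quarters, the Q3 guide midpoint, and an assumed Q4 (the Q4 number is my guess, not guidance):

```python
q1, q2 = 65.3, 76.8          # reported, $M
q3 = (95 + 100) / 2          # Q3 guidance midpoint
q4 = 110                     # assumption only -- modest sequential growth
fy2024 = q1 + q2 + q3 + q4   # ~349.6M
print(round(fy2024 / 115.8, 2))  # ~3.02x 2023's $115.8m
```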

    So far I don’t have too much insight into this, but customer concentration is definitely something to key in on. Sounds like some years, all 3 hyperscalers are customers. This is from Astera’s S-1 (filed Feb 20, 2024):

    I hate to rampantly speculate on what revenue could be, but as this company has almost 180m shares outstanding, they’re a ~$7.5b company right now. So if the annual revenue could be $5b or $10b someday (like we can say for the CRWD’s and DDOG’s and ZS’s of the world), then great. But if this is a more niche opportunity, and they’d be doing amazing to get to even $1b in revenue, then I wonder if the price just isn’t attractive. We’ve seen ELF and CELH, though in very different industries, achieve $1b+ annual revenue and yet struggle to maintain even $10b market caps.

    Bear

    35 Likes

    @PaulWBryant, Exactly. That’s the problem with these small tech companies that have very exciting financials for the first couple of quarters as a public company. It sounds like there’s no end to this opportunity.

    800 products, my goodness, that alone is sort of mind boggling. I really don’t know diddly about these very sophisticated AI installations, but all the same, it’s hard to conceive of where 800 different products might go. It’s not like they have thousands of customers with a gajillion different computer configurations. How big is their customer base? 3 or 4 biggies?

    Do all of these products involve a different piece of hardware? How many of those products (if any) are exclusively software? How many are software built around the same piece of hardware? I’m not sure that it makes any difference, but 800 products for such a small company seems like a marketing nightmare, let alone inventory management problems.

    I can feel the FOMO building in me, but I don’t want to let my emotions run away with this opportunity. Maybe it warrants a very small starter position just to stay aware of it. But there’s a bunch of unanswered questions, actually, it’s more like a bunch of unasked questions. I don’t have a clue about how much I don’t know about ALAB.

    13 Likes

    There are three main product lines with different configurations. The 800 “products” are more like different possible configurations and customizations, from my understanding. I’ve heard it said that at Starbucks there are over a trillion different combinations of coffee you can order, but that doesn’t make the barista’s job much harder. It’s the same for Astera: they follow what the customer specifies, and some of those products support more throughput depending on the need.

    There is no inventory that they manage. Astera is “fabless,” meaning they are not the ones building the silicon themselves. The customer purchases, and the third-party manufacturer produces and ships. Another option is that the customer works with TSMC or another third party to integrate the product into their larger design.

    The COSMOS software is not sold standalone; it’s used in conjunction with the hardware or silicon, the same way NVIDIA’s CUDA is used for programming a GPU. The company classifies their solutions into silicon or hardware, and the hardware portion has slightly lower margins.


    But if this is a more niche opportunity, and they’d be doing amazing to get to even $1b in revenue, then I wonder if the price just isn’t attractive. We’ve seen ELF and CELH, though in very different industries, achieve $1b+ annual revenue and yet struggle to maintain even $10b market caps.

    Their biggest competitors in the space are Broadcom (AVGO), with a 750B market cap, and Marvell (MRVL), with a 60B market cap. Broadcom has 12.5B of quarterly revenue and 43% yoy growth in their latest quarter, and is not near a ceiling yet. This indicates to me that the market is huge for these types of silicon and hardware components.

    There’s also a reasonable comparison to ARM, which sells designs only and does no fabrication. They have about a billion in quarterly revenue but trade around a 135B market cap, or 40 P/S, which is higher than Astera’s 28 P/S right now.


    Adding one article about their IPO that includes some details on the founders,

    21 Likes

    Astera Labs presented at the Deutsche Bank Technology Conference on August 29. I have a few updates on some new understandings of how their business works after this interview with the CEO,

    • The Aries product line accounts for 85-90% of revenue according to the analyst; however, the reason sequential revenue is projected so strong next quarter is largely sales of their newer product, Taurus
    • When they started in this business there were 3-4 competitors who are now all gone, because they couldn’t troubleshoot issues the way Astera did with their COSMOS software. It sounds like some customers even used COSMOS to troubleshoot issues with competitors’ solutions. Their software is a massive differentiator in this field
    • They are agnostic how their parts are being used in many cases, however increasing complexity of systems drives growth and all the hyperscalers have custom use cases that require the product
    • Nvidia’s NVLink is a closed proprietary solution, “we don’t play there” they said

    Some other highlights from the conference include,

    • started the company with a goal to connect all the GPUs together
    • GPUs need to talk to each other at very high data rates
    • working on many new products
    • Aries Smart Cable Module recently extended the reach of PCI Express to seven meters
    • trusted partner to AI platform providers who are innovating at a rapid pace
    • they come to us now and say, “Look, we are planning to deploy our next generation of ASIC accelerator or a third party GPU. Here are the connectivity solutions we need. And Astera, you guys can do chips, you guys can do software here, build the solution for us”
    • COSMOS software harnesses the power of the hardware solution
    • our chips have become the eyes and ears of the connectivity infrastructure with the COSMOS software
    • make sure that customers can deploy all this technology in the fastest manner possible
    • ecosystem outside of Nvidia would use PCI Express or Ethernet derivatives and that’s additional opportunity to Astera
    • each hyper scaler is big enough they can demand a custom solution on power and cooling
    • no two clouds are the same, they are all big enough to require customization of the solution
    • Astera approach is to build products on the ground up with a software first architecture where micro controllers run all the protocol features
    • software that runs on the micro controllers is COSMOS
    • “the solution we give to a CSP 1 or CSP 2 is based on the same product, but customized to their application”
    • retimers recover the signal, understand what the message is, and retransmit a copy of the data, Aries does this with PCI Express
    • PCI Express is the nervous system of the AI server, all the different peripherals have evolved to connect to each other over PCI Express
    • analyst suggests that a bear case that retimers demand lessening as AI architectures are going for a more systems approach - examples of Nvidia’s Blackwell and AMD just bought ZT Systems
    • CEO refutes above comment saying they will still need connections for the NIC (networking)
    • analogy of a balloon: you push one side, it will pop out on the other
    • the need for retimer type products will continue
    • whether it’s the AI platform provider or the CSP may change, but the underlying need of the retimer is necessary
    • analyst says investors try and make it black and white, but Astera mix is different, each use case different, connections will be different but need will persist
    • how Blackwell gets deployed or other products is not our call to make, CSPs will choose to deploy it one way or the other
    • visibility is not based on a guessing game, based on backlog and preproduction shipments we have done
    • visibility also applicable to ASIC platforms where “we have even greater content”
    • deployment complexity is going to be rather large, clusters are very complex
    • customers will have a need for deep diagnostics and ability to understand what is going on with their clusters
    • COSMOS has these diagnostic features; when problems arose in the field they were able to find them quickly, even on competitors’ solutions
    • we are able to “iterate very, very quickly, which our competitors were not able to do”
    • quick iteration got us into this leadership position
    • COSMOS has become very rich and any new entrant will have to “spend that soak time”, soak time to get hardware up to par, but also their software up to par and then convince a CSP to switch solutions
    • if a competitor is not ready with PCI Express Gen 6 solutions, they will find themselves going after a much smaller piece of the pie
    • going Gen 5 to Gen 6 strengthens their competitive position; customers already have COSMOS deployed, which makes it the easiest upgrade path, fully backwards compatible
    • analyst suggests that GPU utilization can be as low as 50% some times, much better to add retimers for total TCO than focus on getting rid of retimers
    • on designs for retimers price is not the first thing the customer is interested in, probably the third or fourth concern, companies want right performance and speeds
    • last number of years improving robustness through COSMOS service
    • GPU clusters run at the speed of the slowest GPU in the cluster
    • interoperability lab is where testing is done to figure out what is working - this gets folded back into the COSMOS software
    • Aries mostly sold as chip form factor but increasingly sold as smart cable
    • Taurus product (ethernet) competes with Credo (CRDO) and Marvell (MRVL), analyst asked about competition
    • Credo came to market first but their solution relies on two to three different cable suppliers, Credo not an expert in cables
    • Cable expertise relies on Molex, TE (TEL), and Amphenol (APH) so working collaboratively with them
    • Astera is different business model, only get to qualify solutions once, and when problems in field they know who to go to which is Astera
    • Taurus ethernet business is “just starting to bloom”, selling low volume 200 gig node this last couple of quarters, and 400 gig starting to ramp
    • Taurus currently addresses a niche application for AI clusters but is heading towards broad-based adoption, where most hyper-scalers are going to add active electrical cable solutions
    • guided towards roughly 20M incremental revenue qoq, meaningful piece is driven by Taurus
    • Taurus on the AI side is going to be connecting and scaling both internal ASIC platforms and third party GPU platforms
    • Taurus-2 and Taurus-3 platforms, Taurus-3 will have 800 gig PAM4 (Pulse Amplitude Modulation with 4 levels)
    • took flexibility of COSMOS architecture and drove PCI Express over optics
    • CXL, new standard on top of PCI Express, used by Intel’s CPU and AMD’s CPU - ROI is very clear here for them which Astera demonstrated
    • “Intel will have a different approach. AMD has a different approach. ARM CPUs have a different approach, hyperscalers have a religion around how they manage memory”
    • we learn what their religion is, what needs to be done for each CPU platform and each hyper scaler and build that into COSMOS
    • COSMOS driving a very strong competitive differentiation compared to anybody else out there
    • 2026 and 2027 will start to see memory pooling applications which is very good for Leo CXL product line
    • COSMOS gives them the capability to manage their fleets in a proactive way and save money on these huge investments they are making - a lot of value there that Astera can capture
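    A quick back-of-envelope on the Taurus-3 PAM4 bullet above. PAM4 carries log2(4) = 2 bits per symbol versus 1 bit for NRZ, so at the same lane speed PAM4 halves the required symbol rate. This is a toy calculation: the 8-lane split for an 800 gig link is my assumption (a common configuration), and I'm ignoring FEC/coding overhead.

```python
import math

def bits_per_symbol(levels):
    """Each symbol of an n-level modulation carries log2(n) bits."""
    return math.log2(levels)

def symbol_rate_gbaud(link_gbps, levels, lanes=8):
    """Approximate per-lane symbol rate, ignoring FEC/coding overhead."""
    return link_gbps / lanes / bits_per_symbol(levels)

# PAM4 doubles the bits per symbol relative to NRZ (2-level) signaling,
# so an 800G link can run each lane at half the baud rate NRZ would need.
print(bits_per_symbol(4))                  # PAM4: 2.0 bits/symbol
print(symbol_rate_gbaud(800, 4, lanes=8))  # ~50 GBd per lane with PAM4
print(symbol_rate_gbaud(800, 2, lanes=8))  # NRZ would need ~100 GBd
```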

    Overall this conference presentation increased my confidence in Astera significantly, because their software is a real differentiator in the marketplace. I was initially concerned about their Aries product line being 85%+ of revenue, but it sounds like Taurus is already ramping significantly, enough to show up meaningfully in revenue.

    I still have some concern that NVLink could become the proprietary standard for connecting GPUs, but it sounds like there is a ton of room outside of NVLink for their products to take hold. My impression is that if any company tries to compete with NVLink on a better solution, they are going to need Astera products as a starting block.

    23 Likes

    Thanks WPR, that makes more sense now. What they are calling a retimer is basically a repeater: it just takes in a signal, cleans it up, and sends it back out. That is a commodity product and one I couldn't invest in. Too easy to produce, and I expect that they will build cabinets with repeaters (retimers) in them and connect all the GPUs that way. It only makes sense that each data center will have a template that each hyperscaler will use to build them out.
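    To illustrate the repeater idea above in toy form (purely illustrative, not Astera's actual signal path): sample a degraded binary signal, decide each bit against a threshold, and re-drive clean full-swing levels. Real retimers also do clock recovery and equalization, which this sketch skips.

```python
# Toy model of what a repeater/retimer does: recover the intended bits
# from an attenuated, noisy signal and retransmit them at full amplitude.
def retime(noisy_samples, threshold=0.5, low=0.0, high=1.0):
    """Slice each sample against a threshold and emit a clean level."""
    return [high if s > threshold else low for s in noisy_samples]

# A '1 0 1 1' pattern after attenuation and noise:
degraded = [0.72, 0.18, 0.64, 0.81]
print(retime(degraded))  # [1.0, 0.0, 1.0, 1.0]
```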

    So the real meat of what ALAB has is COSMOS, a software suite that helps monitor and detect problems between GPUs; if they have a moat, that is where it will be.

    13 Likes

    Isn’t their Gross Margin (77%) evidence that $ALAB’s customers do not regard their products as commodities?


    Also: is anyone else thinking that eventually AI Factory cluster connectivity will expand from two dimensions into three dimensions?

    Two dimensions (current):

    • GPU Clusters connected via racks all on the same floor

    Three dimensions (future):

    • GPU Clusters connected via racks on the same floor, AND:
    • GPU Clusters connected via racks on the floor ABOVE them and the floor BELOW them

    If so, $ALAB has the product suite to make the necessary hyper-speed connectivity happen.

    My apologies for posting pure speculation but wondering if anyone else out there is thinking the same.

    I’ve taken a starter position.

    12 Likes

    Good point intjudo, but if optical transceivers move into this space then there won't be a need for copper repeaters. I suspect, with only a 7-meter reach, that they will be going optical sooner or later.

    3 Likes

    Nvidia has made NVLink its standard for connecting GPUs within a server. Its GH200 “Grace Hopper Superchip” connects the Hopper GPU to the Grace ARM chip via NVLink, enabling what they call “coherent memory,” which is a way they efficiently share memory. I don’t see anyone building any sort of challenge to NVLink. And I don’t see anyone caring too much about what Nvidia is using internally within its products like GH200 or other self-contained servers.

    Maybe someday ARM or Intel will make enough GPUs to matter, but today people spending $35k on an Nvidia GPU aren’t going to mess around with some custom third-party networking solution for in-server connectivity. And while I haven’t researched it, it appears Astera Labs’ secret sauce is software for cleaning up degraded internal signals. Does NVLink have these problems? I don’t think so.

    The connecting of servers together is a different thing. Does Astera Labs try to play in that space? This is where Nvidia has InfiniBand (an open standard everyone else abandoned; Nvidia bought Mellanox because they didn’t want to compete) and the Ethernet Consortium finally stopped in-fighting enough to produce Ultra Ethernet - but even here Nvidia has Spectrum-X for those that want ethernet compatibility and don’t mind spending more money with Nvidia.

    Maybe I’m missing what this company is about, but it may be that it’s mostly for connecting non-Nvidia GPUs internally to create servers?

    17 Likes

    From my understanding, Astera products are incorporated into a lot of the reference designs for Nvidia and other chip manufacturers. This is from the S-1,

    We have developed deep relationships across the data center infrastructure ecosystem by building our products to be interoperable with products from NVIDIA, Advanced Micro Devices, Intel, proprietary in-house processors from hyperscalers, as well as those from memory and storage vendors. At our Interop Lab, industry participants work together to ensure compatibility and stress test products early in their product development process. Our Interop Lab positions us at the front and center for new product adoptions within the ecosystem, ensuring better interoperability of our products in customers’ infrastructure and providing us with cycles of learning ahead of our competitors.

    For instance, we believe being integrated into reference and/or commercial designs provided by industry leaders like NVIDIA, Advanced Micro Devices, and Intel demonstrates their trust in us as a reliable supplier. By aligning our solutions with their established designs, we gain understanding of their product architectures, which not only expedites our product development process, but also ensures better compatibility with a wide array of other hardware elements within the ecosystem. The resulting robustness of our products embeds us deeper with our customers, further strengthening our relationships across the data center infrastructure ecosystem.

    There were a lot of mentions of Blackwell and Nvidia in their latest earnings and how the relationship works. Blackwell itself is being used for a ton of different customized use cases.

    We ship and support our hyperscaler customers initial program developments that are based on Nvidia’s Blackwell platform, including GB200. We look forward to supporting more significant production ramps in the quarters to come.

    Second, we see increasing content on next-generation AI platforms. Nvidia’s Blackwell GPU architecture is particularly exciting for us as we expect to see strong growth opportunities based on our design wins as hyperscalers compose solutions based on Blackwell GPUs, including GB200 across their data center infrastructure. To support various AI workloads, infrastructure challenges, software, power and cooling requirements, we expect multiple deployment variants for this new GPU platform.

    For example, Nvidia cited 100 different configurations for Blackwell in their most recent earnings call. This growing trend of complexity and diversity presents an exciting opportunity for Astera Labs as our flexible silicon architecture and COSMOS software suite can be harnessed to customize the connectivity backbone for a diverse set of deployment scenarios. Overall, we expect our business to benefit from the Blackwell introduction with higher average dollar content of our products per GPU, driven by a combination of increasing volumes and higher ASPs.

    Our learnings from hundreds of design wins and production deployment over the last several years enables us to quickly deploy PCIe Gen 6 technology at scale. As Jitendra noted, we are now shipping initial quantities of preproduction volume for Aries 6 and currently have meaningful backlog in place to support the initial deployment of hyperscaler AI servers featuring Nvidia’s Blackwell GPUs including GB200.

    From an analyst,

    Congratulations on the strong results. During the quarter, lots of concerns around your large GPU customer and one of their next-generation GPU SKUs, the GB200. Glad that the team could clarify that your dollar content across all Blackwell GPU SKUs is actually rising versus prior-generation Hopper.

    We have several design wins for PCIe Gen 6 like we shared in today’s call that are all designed around the next-generation GPU platform, specifically the Blackwell-based GPUs from Nvidia, which I publicly noted to support Gen 6. So we’ll continue to work through them. We are currently shipping preproduction volume to support some of the initial ramps, including for GB200-based platforms. So overall, we feel good about the position that we are in, both in terms of Gen 5 as well as transitioning those designs into Gen 6 as the platforms develop and grow.

    Now if you double-click specifically on Nvidia or the Blackwell system, it comes in many, many different flavors. If you think about the overall Blackwell platform, it is really pushing the technology boundaries. And what that is doing is it’s creating more challenges for the hyperscalers, whether it is power delivery or thermals, software complexities or connectivity. As these systems grow bigger, they run faster, become more complex, we absolutely think that the need for retimer goes up. And that drives our content higher on a per GPU basis.

    we already have multiple design wins in the Blackwell family, including the GB200. We are shipping initial quantities of preproduction to the early adopters. And we do have backlog in place that serves the Blackwell platform, including GB200.

    The second reminder that I want to kind of note is that specifically for Blackwell, we expect our PCIe content per GPU to go up. Now what you’re asking is specifically about the deployment scenarios, which right now is evolving, right? So we have design wins for several different topologies, including the GB200. But if you look at the various different options that Nvidia is offering and how those are being composed and considered by the hyperscalers, that situation is evolving at the moment.

    And as new CPU/GPU architectures come about, just like the Nvidia’s Blackwell platform, we do expect to gain from it both on the retimer content as well as other products that we can service to this space.

    I would say it’s a little bit hard to generalize. It varies quite a bit. Even one particular platform, you can have different form factors. Even if you look at, let’s say, Blackwell, you have HGX, you have MGX, you have NVLs, you have custom racks that are getting deployed. And if you look at each one of them, you’ll find different amount of content.

    if you look at the Blackwell family, they use NVLink, which is a closed proprietary standard, which we do not participate in. But when somebody uses a PCI Express or PCI Express-based protocol at their back-end connectivity, then our content goes up pretty significantly because now we are shipping not only our retimers but also the smart cable – Aries smart cable modules into that application.


    My understanding of how it all works together is still evolving, but there are many use cases of Blackwell which do not use NVLink, because some of the other systems they need to connect the GPU cluster to are PCIe based. Overall, what Astera is saying is that the introduction of Blackwell will increase their sales due to the added complexity of the product and the need for customization. They’ve mentioned in a couple of places that investors are trying to categorize things in black-and-white terms, but the evolution of the technology is incredibly complex. What they are seeing is that demand for their product is going way up via new design wins.

    17 Likes

    It looks like they are on top of the optical side also. Here is a good video on how they interconnect GPUs and servers.

    Edit:
    If anyone remembers AAOI, they had optical transceivers that connected communication equipment at much longer distances. It is the same basic principle.

    6 Likes

    The 77% gross margin is certainly attractive, and the company has said the customers are not selecting on price, but more on speed and specs to ensure the system they are building meets performance expectations. It does sound to me like Astera has leverage on pricing, which is something I really like, especially after watching Supermicro margins tank recently.

    The data center transitioning to a 3-D way of working is fascinating and possibly enabled by a few new innovations, one of them maybe being Astera’s products. The YouTube clip shared by @buynholdisdead above says that the products work in the following “topologies”,

    1. chip to chip
    2. box to box
    3. rack to rack
    4. row to row

    Additionally, those topologies can be connected over either copper or optical interconnects. They’ve said their cables can reach up to seven meters, which seems like enough to go through the ceiling and connect GPUs which reside on the floor above.

    Anecdotally, I live in an area where a lot of data centers are being built, and most of the buildings look like two- or three-story buildings that are very wide. However, there is one newer data center being built where they dug massively under the ground, and the data center is now multi-level as the concrete is being put in. Currently it looks like maybe 7-8 stories, and I am wondering if innovations by Astera and the power-generation companies are allowing data centers to be built vertically now.

    11 Likes

    The stock closed 10% down on Friday, at one point it was down 14%. Does anyone know why?

    2 Likes

    I’m no expert either, but NVLink is for connecting within a system, not between systems. That’s where InfiniBand comes in, or Nvidia’s own ethernet compatible product line, Spectrum-X.

    So, I’m a bit confused now, does Astera Labs compete with NVLink or with InfiniBand/Spectrum-X/Ultra Ethernet?

    If competing with NVLink, then what are the numbers here? How many people building out servers eschew the Nvidia NVLink standard, which comes pre-packaged in reference servers, to build their own? And since this is within a server, are these “design wins” from companies like Dell and HPE and SMCI that design and build servers around Nvidia GPUs? Again, how many want/need custom connectivity rather than just something like a Grace Hopper (or soon to be Grace Blackwell)?

    What are unit volumes and revenue here, and what’s the real potential for future growth?

    10 Likes

    What do you mean by system here? Are you considering NVLink and NVLink Switch as separate, to say that NVLink doesn’t connect the systems but the NVLink Switch can? Or do you mean that NVLink Switch stops at the rack level, and that’s where InfiniBand/Spectrum-X is needed to extend beyond that?

    I’m asking because the terminology can get confusing, as terms are used in hardware to mean different things depending on the context. For example, “GPU” can mean a single gaming graphics card, a B100 which is a single Blackwell chip, a GB200 which is two Blackwells connected to an ARM CPU via NVLink, or a GB200 NVL72 which is 72 Blackwell “chips” connected together through NVLink Switch. Additionally, Jensen has said the B100 itself is basically a bunch of GPUs combined together to make the Blackwell chip.
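    The terminology above can be reduced to simple counting. Taking the definition of a GB200 superchip as two Blackwell GPUs plus one Grace ARM CPU (my reading of Nvidia's public materials), the NVL72 rack composition falls out of the arithmetic:

```python
# Counting silicon in a GB200 NVL72 rack, using the terminology above.
# Assumption: each GB200 "superchip" = 1 Grace CPU + 2 Blackwell GPUs,
# and the NVL72 connects 72 Blackwell GPUs through NVLink Switch.
BLACKWELLS_PER_SUPERCHIP = 2
GPUS_IN_NVL72 = 72

superchips = GPUS_IN_NVL72 // BLACKWELLS_PER_SUPERCHIP
grace_cpus = superchips  # one Grace CPU per GB200 superchip

print(superchips, grace_cpus)  # 36 superchips, 36 Grace CPUs per rack
```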

    From the Nvidia website they show this,

    With NVLink Switch, NVLink connections can be extended across nodes to create a seamless, high-bandwidth, multi-node GPU cluster—effectively forming a data center-sized GPU

    NVLink was released back in 2016, before Astera Labs was even founded; it’s really Gen 5 NVLink which is new with Blackwell. So the solutions that Astera Labs has have been playing alongside NVLink in the ecosystem for a while now.

    I believe when Astera Labs says they are not in the NVLink space, they mean they are not competing with Nvidia to connect their Blackwell chips inside of the GB200.


    So, I’m a bit confused now, does Astera Labs compete with NVLink or with InfiniBand/Spectrum-X/Ultra Ethernet?

    The Taurus line is Astera’s Ethernet product. I’m still trying to understand if Nvidia’s solutions are a direct competitor to Taurus, but it does not sound like it. I’ll refer to this AI response for this part, because I’m not that familiar with the subtle differences in how Ethernet works. This was asking if Taurus is a competitor to InfiniBand/Spectrum-X/Ultra Ethernet,


    And since this is within a server, are these “design wins” from companies like Dell and HPE and SMCI that design and build servers around Nvidia GPUs? Again, how many want/need custom connectivity rather than just something like a Grace Hopper (or soon to be Grace Blackwell)?

    My understanding is that quite a lot of businesses need customizations as they target solving their own business needs. Here was an analyst question from their Deutsche Bank conference,

    “And so how do you differentiate in Retimers versus the competition? I know you talked a little bit about the COSMOS software side of things. But just talk about it’s a standard PCIe, how do you or Astera differentiate versus what Marvell reports after the close tonight, you have the guys from Broadcom coming at that market too, what makes Astera special.”

    And the CEO answered,

    “You say something, let me answer that first, and then I will answer the competitive question as well, which is investors like to see it black and white. So maybe if I may kind of help understand the situation, what the mix of products will be, how exactly Grace Blackwell or Blackwell gets deployed or for that matter, other products get deployed, that’s not really our call to make. NVIDIA is going to do what they will do, CSPs will choose to deploy it one way or the other.


    What are unit volumes and revenue here, and what’s the real potential for future growth?

    Marvell just reported that the “data center end market” had revenue of 881M and +92% year over year growth. However, overall revenue growth for the company was -5% year over year.

    I looked into that report a little for highlights and a couple of items stood out: “Within data, we expect custom silicon to be the largest revenue growth driver”.

    They also said, “Q4 is going to be extremely strong in data center and AI, you can probably draw a line of sight to it”.
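    For context on those Marvell numbers, the year-ago data center base implied by +92% growth is easy to back out. This is just my arithmetic on the figures quoted above, nothing from Marvell directly:

```python
# Back out Marvell's year-ago data center revenue from the quoted figures:
# 881M this quarter, up 92% year over year.
current_dc = 881.0   # $M, data center end market this quarter
yoy_growth = 0.92

prior_dc = current_dc / (1 + yoy_growth)   # implied year-ago base
added = current_dc - prior_dc              # incremental revenue in a year

print(round(prior_dc), round(added))  # ~459M a year ago, ~422M added
```

So data center roughly doubled in dollar terms in a year even while Marvell's total revenue shrank 5%, which is why I read it as a rising tide for this niche rather than share loss for Astera.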

    @trailbird I believe the market was trading Astera down off of Marvell’s results yesterday. However, I see their results more as a positive for Astera because it’s a rising tide lifting all boats. Astera was up over 10% on Thursday too, so it could have just been traders taking profits.


    Overall it’s hard to provide a precise answer on the competitive landscape because there are innumerable use cases. For example, just in the Aries product there’s chip-level, box, rack, and row-level connectivity in both silicon and cables, with optical as an option.

    The products coexist with Nvidia products. With Broadcom growing 40% on overall revenue and Marvell’s data center business growing 92%, the market for these types of solutions is large.

    I’m good with investing here without a precise understanding of how each product competes simply because the landscape is so complex. There are new products being released all the time, and as Astera says they are basically agnostic on how the product gets used. The customer needs their solutions to build what they want.

    12 Likes

    “Mainly” seems correct; you’ve also mentioned $CRDO.
    Looks like they came on the market in Jan 2022.
    Roughly similar market cap, Gross Margin and P/E vs. $ALAB.

    I haven’t learned much about them, but it seems $CRDO was actually first to market with AEC (“Active Electrical Cable”). I think learning more about $CRDO could be useful in understanding the competitive advantages of $ALAB.

    These links are behind a paywall, but the first page of each is public and has useful info:

    Thanks for your work so far on this, @wpr101 . This is a lot to come up to speed on.

    11 Likes