Battle between NVDA's Proprietary Spectrum-X Ethernet Platform and the Open UEC and UALink Standards

Below is from an X account named “Notes on the Noise” @NotesontheNoise

https://x.com/NotesontheNoise/status/1978001099954311332

The OCP summit spooked investors in $ANET and $ALAB after $META and $ORCL selected $NVDA's Spectrum-X Ethernet platform. Here’s what you need to know

The core conflict is drawn between two opposing philosophies. On one side: $NVDA, leveraging its market dominance to advance a vertically integrated, full-stack “AI Factory” vision through a tightly controlled ecosystem

On the other: a powerful alliance of competitors and hyperscale customers ($AMD, $INTC, $META, $MSFT) coalescing around open standards (UEC & UALink) to create a disaggregated, multi-vendor ecosystem.

The centerpiece is the Spectrum-X™ Ethernet networking platform, which $META and $ORCL announced they will adopt. This platform validates $NVDA 's concept of the data center as a “mega AI factory”

$NVDA 's Spectrum-X (integrated AI Ethernet) is challenged by the Ultra Ethernet Consortium (UEC).

UEC, with 100+ members, aims to evolve standard Ethernet using the UET protocol to match or exceed InfiniBand performance via an open, multi-vendor standard.

This means the traditional choice between proprietary InfiniBand (for performance) and standard Ethernet (for cost) is collapsing. The new battle is over who defines the future of high-performance Ethernet

The Ultra Accelerator Link (#UALink) Consortium is directly challenging Nvidia’s proprietary NVLink. UALink defines an open, memory-semantic fabric designed to connect up to 1,024 accelerators.

If #UALink succeeds, it effectively commoditizes the critical interconnect layer, allowing hyperscalers to mix and match GPUs and accelerators from different vendors. This could exert significant downward pressure on $NVDA ’s formidable system margins

$META perfectly illustrates this tension. They adopted $NVDA 's Spectrum-4 ASIC for their new Minipack3N switch (pragmatic near-term performance), while simultaneously championing UEC/UALink and open software (FBOSS) to secure long-term strategic leverage against vendor lock-in

Outlook: $ANET: Short-term pressure due to direct competition from $NVDA's integrated stack and silicon design losses at $META. Long-term bullish due to its leadership role in the open UEC movement and the massive expansion of the Ethernet-for-AI Total Addressable Market (TAM)

Outlook: $ALAB: Bullish. It is perfectly aligned as the key silicon enabler for the open, disaggregated rack and #UALink standards. Its product portfolio (Scorpio-X, CXL) provides the picks and shovels for the anti-$NVDA coalition. The TAM is big enough that the near-term impact should be limited

Outlook: $CRDO: Most Bullish. Its business is agnostic to the architectural war, focusing on essential physical layer components (AECs, DSPs). Credo differentiates by solving system-level reliability—introducing the “ZeroFlap” initiative for massive AI clusters

The contest ensures that the market will be dynamic and fiercely competitive. Hyperscalers are actively hedging their bets, building an “off-ramp” from single-vendor dominance while still leveraging the best available technology.

Note: I am not a technical expert in AI systems. I utilised Gemini Deep Research to carry out this analysis. There may be errors or omissions.

42 Likes

The information above is excerpted from this recording: Spotify – Web Player

At 16:16, Jose Najarro from the Motley Fool talks about the threats Astera Labs is facing.

7 Likes

Apparently a new open standard called “ESUN” (Ethernet for Scale-Up Networking) was introduced at OCP (the Open Compute Project summit, held in San Jose from October 13 to 16). ESUN is intended to be a non-proprietary alternative to the proprietary solutions being promulgated by Nvidia and Broadcom (AVGO). ESUN’s prominent backers suggest that it may be more widely adopted than the UALink standard that ALAB is designing products for.

This is all Greek to me, so I’ve asked Perplexity to explain ESUN and what its impact might be on ALAB. Here is the response:

Ethernet for Scale-Up Networking (ESUN) is a new open industry initiative, launched under the Open Compute Project (OCP) in October 2025, to evolve Ethernet standards and architectures for ultra–high-performance AI scale-up networking—the dense, intra-cluster connectivity used inside large AI compute systems.

Core Purpose

ESUN’s goal is to adapt standard Ethernet to handle the unique bandwidth, latency, and reliability demands of scale-up AI systems—those interconnecting hundreds of thousands of GPUs, XPUs, and accelerators within and across data centers. The initiative seeks to advance Ethernet beyond its traditional data center networking role into the high-speed, low-latency interconnect space currently dominated by proprietary technologies such as Nvidia’s NVLink or InfiniBand.

Key Focus Areas

  • Open, Standards-Based Ethernet: ESUN focuses exclusively on open Layer 2 and Layer 3 Ethernet framing and switching for scale-up workloads—excluding proprietary host stacks or non-Ethernet approaches.

  • Lossless, Low-Latency Operation: Aims to define error-resilient and lossless Ethernet behavior optimized for large AI workloads, using techniques like link-layer retry, priority-based flow control, and credit-based flow control (a toy sketch of the credit idea follows this list).

  • Interoperability & Modular Design: Promotes interoperability among XPUs (CPU/GPU/ASIC accelerators) and Ethernet switch silicon, enabling multivendor deployment without lock-in.

  • Collaboration with Other Ecosystems: Works alongside the Ultra Ethernet Consortium (UEC) and UALink Consortium, ensuring integration with other open efforts for AI interconnects and avoiding fragmentation.
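To make the flow-control bullet concrete, here is a minimal, purely illustrative Python sketch of credit-based flow control. It assumes nothing about the actual ESUN wire format; the class name and buffer size are made up:

```python
from collections import deque

class CreditLink:
    """Toy model of credit-based flow control on a single link.

    The receiver advertises a fixed number of buffer slots (credits).
    The sender may only transmit while it holds credits; each frame
    consumes one credit, and the receiver returns a credit whenever it
    drains a frame from its buffer. That back-pressure is what prevents
    buffer overrun (and hence packet loss) without retransmission.
    """

    def __init__(self, buffer_slots: int = 8):
        self.credits = buffer_slots        # credits currently held by the sender
        self.rx_buffer: deque = deque()    # frames waiting to be processed

    def try_send(self, frame: str) -> bool:
        """Sender side: transmit only if a credit is available."""
        if self.credits == 0:
            return False                   # back-pressure: sender must wait
        self.credits -= 1
        self.rx_buffer.append(frame)
        return True

    def drain_one(self) -> None:
        """Receiver side: process one frame and return its credit."""
        if self.rx_buffer:
            self.rx_buffer.popleft()
            self.credits += 1


if __name__ == "__main__":
    link = CreditLink(buffer_slots=2)
    print(link.try_send("frame-0"))   # True
    print(link.try_send("frame-1"))   # True
    print(link.try_send("frame-2"))   # False -> sender pauses, nothing is dropped
    link.drain_one()                  # receiver frees a buffer, credit returns
    print(link.try_send("frame-2"))   # True
```

Priority-based flow control is a similar back-pressure idea, applied per traffic class on the link rather than per buffer credit.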

Industry Participation

ESUN includes a broad alliance of leading technology companies:
AMD, Arista Networks, ARM, Broadcom, Cisco, HPE Networking, Marvell, Meta, Microsoft, Nvidia, OpenAI, and Oracle—among others—collaborating to define Ethernet as the foundation for AI scale-up infrastructure.

Strategic Importance

ESUN effectively positions Ethernet as a future-proof, open alternative to proprietary AI networking stacks. It addresses the growing market need for scalable, standards-based interconnects that support massive GPU and accelerator clusters—vital for AI training, inference, and HPC scale-up systems.

In summary, ESUN represents an industry-wide push to make Ethernet the universal, open-standard fabric for connecting large-scale AI clusters, combining high performance with interoperability and cost efficiency.

The launch of “Ethernet for Scale-Up Networking” (ESUN) as an open standard initiative will likely have significant and complex effects on Astera Labs (ALAB) and its role in promoting the UALink standard.

Astera Labs & UALink

Astera Labs is a core promoter and developer of the UALink standard, which is an open, high-performance interconnect protocol designed for scale-up AI infrastructures, enabling accelerator-to-accelerator communication across large GPU and XPU clusters at ultra-high bandwidth and low deterministic latency. UALink is specifically engineered for the scaling demands of modern AI training and inference pods, working at the physical and data link layers to provide efficient, lossless transport between accelerators.

How ESUN Impacts Astera Labs and UALink

  • Open Ethernet Push: ESUN is a new industry consortium aimed at evolving Ethernet for scale-up AI networking, potentially overlapping some use cases targeted by UALink. ESUN seeks to make Ethernet (with open Layer 2 and Layer 3 standards) viable for dense intra-pod communication, an area where proprietary interconnects (NVLink, UALink, etc.) have dominated until now.

  • Modular and Layered Approach: ESUN is designed to support various upper-layer transports—including UALink and Broadcom’s Scale-Up Ethernet (SUE-T)—but does not aim to create one “unified” protocol for all hosts. This fosters “flexibility” and “practical innovation,” but may fragment hardware and host software ecosystems in the near term.

  • Interoperability Opportunities and Threats: If ESUN becomes widely adopted, there is a risk that it could commoditize some functions currently fulfilled by UALink (e.g., basic fabric switching, framing, error recovery), especially at the physical and data link levels. However, UALink and Astera Labs may benefit by integrating with ESUN at the transport or host stack level, or by becoming key suppliers of modular IP and interconnect silicon that comply with ESUN (thus leveraging both standards).

  • Stock and Market Reaction: Astera Labs’ stock declined sharply after the ESUN announcement, reflecting concerns over potential diminished differentiation if Ethernet-based open standards reduce the need for proprietary accelerator interconnect silicon. Still, analysts indicate that Astera’s platform integrates both Ethernet and UALink, and its leadership in PCIe/CXL retimer and interconnect IP remains highly relevant as the AI ecosystem adopts more open, multi-vendor architectures. ESUN enhances innovation and interoperability for AI cluster networking, but increases pressure on proprietary and semi-proprietary standards like UALink to adapt, become more modular, or integrate more deeply with Ethernet.

  • Astera Labs is likely to remain a core player, but may face lower margins and heightened competition as Ethernet-based solutions improve, unless it successfully positions UALink as a specialized protocol for ultra-low latency, deterministic, and lossless pod interconnects, while simultaneously providing seamless integration with ESUN for scale-out networking.

  • In the medium term, expect Astera Labs and UALink to emphasize performance, latency, and optimization roles within wider open Ethernet fabrics.

In summary: ESUN will pressure Astera Labs and UALink to adapt, but it also provides new opportunities if they successfully innovate and interoperate within the broader open standard ecosystem now forming in AI and cloud data center networking.

18 Likes

ALAB comments at OCP re: ESUN (from Perplexity)

At the OCP Global Summit 2025, Astera Labs (ALAB) emphasized its commitment to open standards and ecosystem collaboration for rack-scale AI infrastructure, acknowledging the accelerating industry momentum behind standards like ESUN (Ethernet for Scale-Up Networking). Astera Labs showcased live demos and presented technical sessions that highlighted AI connectivity solutions built on PCIe, UALink, Ethernet, CXL, and OpenBMC. The company stressed that innovation in rack-scale design depends on interoperability and open standards, and that its approach is to “collaborate, standardize, and accelerate” the development of next-generation AI compute platforms.

Regarding ESUN specifically, Astera Labs signaled that while ESUN standardizes physical and data link layers on Ethernet—potentially shifting industry focus toward Ethernet-centric solutions—the reality of near-term AI infrastructure remains hybrid, with a combination of CXL, PCIe, UALink, and Ethernet required to optimize connectivity and performance. Analysts noted that although ESUN does not unify upper-layer protocols like UALink, Astera Labs’ connectivity platform is designed for coexistence and integration, allowing rapid adoption of both open Ethernet and proprietary accelerator links within large-scale AI clusters.

In summary, Astera Labs acknowledged ESUN’s importance at OCP, reaffirmed support for open standards, and highlighted its unique role in enabling multi-standard, interoperable rack-scale AI infrastructure where Ethernet, UALink, and other protocols each play a complementary role.

21 Likes

This contains mistakes and is confusing, even for tech-savvy people: it conflates standards with products and doesn’t differentiate between scale-up and scale-out. I’m not an expert, but I believe some corrections and explanations are in order:

First, there are different types of connectivity, Scale-Out and Scale-Up:
• InfiniBand competes with Ethernet, both mostly used for “Scale-Out” purposes, which is basically connecting servers to each other and then to the outside world.

• NVLink competes with PCIe, both mostly used for “Scale-Up” purposes, which is connecting GPUs to each other and to memory within a server (a tiny sketch of the two tiers follows).
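If it helps to see those two tiers side by side, here is a tiny, purely illustrative Python sketch; the node names, GPU counts, and fabric labels are just placeholders:

```python
from dataclasses import dataclass, field

@dataclass
class Server:
    """One server: GPUs talk to each other over the scale-up fabric."""
    name: str
    gpus: int = 8
    scale_up_fabric: str = "NVLink or PCIe"            # inside the box

@dataclass
class Cluster:
    """Many servers: boxes talk to each other over the scale-out fabric."""
    scale_out_fabric: str = "Ethernet or InfiniBand"   # between boxes
    servers: list = field(default_factory=list)

cluster = Cluster(servers=[Server(f"node-{i}") for i in range(4)])
for s in cluster.servers:
    print(f"{s.name}: {s.gpus} GPUs linked via {s.scale_up_fabric}")
print(f"nodes linked to each other via {cluster.scale_out_fabric}")
```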

Second,

InfiniBand is not proprietary; it’s actually an open standard that grew out of two competing industry efforts backed/led by Intel, Sun Microsystems, Dell, Compaq, and HP. The companies that made InfiniBand products all got acquired, with the last and largest, Mellanox, bought by Nvidia in 2019 (beating out Intel and Microsoft, btw). Since not many were paying attention to HPC (High Performance Computing), much less AI, back then, the deal was approved by regulators.

That said, it may feel like InfiniBand is proprietary, because for all intents and purposes, today only Nvidia makes top-of-the-line InfiniBand products. All the other InfiniBand networking companies that were acquired had their products shut down by the parent companies. InfiniBand gained traction as the Ethernet standards bodies were slow to respond to the needs of HPC, and then totally missed AI.

Initially, only InfiniBand was fast enough for the large AI clusters being deployed. But companies complained that their data center engineers were only familiar with Ethernet, and that the AI “output” often had to be passed to existing non-AI servers to be further processed or delivered to the end user, which went over Ethernet anyway. So a push happened to make Ethernet faster, and now the latest versions of Ethernet are neck and neck with InfiniBand.

Nvidia’s Spectrum-X product is their way of bringing InfiniBand-class speed to Ethernet compatibility. Read more about it here: https://www.sdxcentral.com/analysis/inside-spectrum-x-nvidias-ethernet-networking-platform

On the Scale-Up side, NVLink is indeed proprietary, developed by Nvidia, and it provides direct and fast GPU-to-GPU connections. AFAIK, it’s still much faster than UALink, but it only works with Nvidia chips and Nvidia is the only supplier. UALink is a new standard, and it is not just a faster PCIe.

I need to run out now, but I’ll try to come back with another post with more.

34 Likes

My high-level view here is that open standards are a benefit to Astera whether or not Astera is the one driving the standard. I would be more concerned here if Nvidia and Broadcom were partnering up directly and looking to lock out other vendors. At the same time, this would force companies like AMD further down the path of needing open standards.

Astera’s only Ethernet product is Taurus, which is their third-largest revenue product behind Aries and Scorpio. Again, it doesn’t seem like a big negative to me that there could be a new standard here. Astera has said they are agnostic to technological changes and customer demands; they can help the customer build whatever they want.

Where I see the general market having a misunderstanding is the ASIC market, i.e., building chips for specific tasks. This segment of semiconductors has grown faster than anybody in the industry anticipated. The hyperscalers are moving a lot of well-known, frequent tasks to ASIC chips.

A good analogy may be to think of Blackwell as an F1 race car. For cutting-edge, high-performance work you will need that Blackwell-level performance. However, for more basic tasks you do not need all that performance; in the analogy, you wouldn’t want to drive an F1 race car to work, or up a mountain. Probably the strongest example of this evolution is Bitcoin, which went from CPU → GPU → ASIC.

I am also seeing the market have large reactions to product announcements in this segment of AI semiconductors. Astera announcing Scorpio was enough to drive the stock up 30%, while Nvidia and Broadcom product announcements are enough to make the stock go down significantly. Part of what I see happening is that the market is struggling to price this level of growth. This level of growth in semiconductors is somewhat unprecedented, and it leads to more frequent boom/bust cycles.

The stock went public in March of 2024 at $80 and then crashed to $40. It then rose to about $150 before crashing to $60, and subsequently rose to $250 before crashing down to $160 again. That’s already three boom-and-bust cycles on the stock in less than two years as a public company. To me it’s a sign the company is still misunderstood by the market, and a big part of the reason there are going to be huge swings in this stock.

However, if you look at the underlying business, the company has done incredibly well. They have beaten and raised guidance every quarter so far, along with a blistering pace of product development and design wins. Their one slightly weaker quarter was due to getting locked out of the Chinese market. Management barely mentioned the Chinese market as a factor because they said the growth elsewhere more than compensates for the lost revenue there.

Lastly, I’ll add that the most recent update from Astera has been the 10+ design wins from hyperscalers, along with AMD committing to a long product roadmap on the standard that Astera is leading. I am skeptical of these bear cases, which would have you believe that Astera will lose these design wins. It was just two months ago that the company said directly they are working on 2027 plans with the major hyperscalers. My guess is that when the company reports in a couple of weeks, the price is going to recover strongly on a big result.

41 Likes

Even more, Astera’s products are more software-defined, so I suspect it wouldn’t be hard for them to adapt to whichever way the prevailing standards winds blow.

ASIC stands for “Application-Specific Integrated Circuit.” They are chips designed to run specific applications that have already been designed, implemented, and debugged. They are not general purpose. So, if you’ve got a workload you need to run often and execute quickly, it can be cheaper to buy and install ASIC-based servers than GPU-based servers.

But, if you are offering AI Compute as a Service, then you need to provide a general purpose service, and today that means Nvidia GPUs running CUDA. And if you’re developing new AI training or inference applications, you’re not ready to codify those in silicon (ASIC) yet.

The Bitcoin evolution from CPU to GPU to ASIC is a good example. GPUs surpassed CPUs because of their performance. And since the Bitcoin calculations are all well-specified and need to be performed at scale to mine, it pays to design an ASIC specifically for Bitcoin mining and use those cheaper solutions instead of general purpose GPUs.
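To see why that workload is so ASIC-friendly, here is a minimal Python sketch of the core Bitcoin proof-of-work loop (a double SHA-256 over the 80-byte block header, retried with different nonces). The header bytes and the easy target are dummy values for illustration, not a real block:

```python
import hashlib
import struct

def block_hash(header_76: bytes, nonce: int) -> bytes:
    """Bitcoin proof-of-work: SHA-256 applied twice to the 80-byte header."""
    header = header_76 + struct.pack("<I", nonce)   # last 4 bytes are the nonce
    return hashlib.sha256(hashlib.sha256(header).digest()).digest()

# Dummy 76-byte header prefix (version, prev hash, merkle root, time, bits).
header_76 = bytes(76)

# Mining is just this one fixed function evaluated billions of times with
# different nonces until the hash falls below a target -- no branching on
# data, no general-purpose logic. That is exactly the kind of fixed,
# well-specified workload that pays to harden into an ASIC.
target = int("00ffff" + "0" * 58, 16)   # artificially easy target for the demo
for nonce in range(1_000_000):
    h = block_hash(header_76, nonce)
    if int.from_bytes(h[::-1], "big") < target:
        print(f"found nonce {nonce} -> {h[::-1].hex()}")
        break
```

An ASIC simply bakes that double-SHA-256 pipeline into silicon and runs millions of copies in parallel, which is why it crushes general-purpose GPUs on this one task while being useless for anything else.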

Morgan Stanley’s note was about the Nvidia-Intel partnership potentially reducing, in the future, Astera’s existing PCIe content on Nvidia server boards, which today connect Nvidia’s GPUs to Arm-based CPUs. Nvidia is continuing to push their NVLink solution, and will now eventually have Intel on board, so Nvidia’s GPUs will connect to Intel’s x86 CPUs, using NVLink. The note does say they can’t predict the future and whether that connectivity will or won’t leverage an existing or future Astera Labs product. But, given the recent run-up, many investors decided now was a good time to book profits.

Astera can’t talk about it publicly due to NDAs, but they know where just about everyone, from Nvidia to AMD to the NeoClouds to the hyperscalers, is going, since they all need communication products and Astera has proven themselves over and over.

37 Likes

These two are really good. They are calling this correction more valuation-based than a matter of competition. https://www.youtube.com/watch?v=5f2LHC1RSz0

4 Likes

It’s worth taking a look at the track record this channel has on Astera. They posted a deep dive back on September 12, 2024, when the stock price for Astera was $45. It was one of the few channels I saw providing any coverage on the company, but here is what they said back then:

  • “There are some things we don’t like”
  • Valuation isn’t cheap
  • No product depth
  • “Is AI a bubble?”, downturn will come either way
  • “Are we buying?”, “For us the answer is no”
  • Small cap, doesn’t add anything to our personal portfolio
  • Already have Nvidia
  • Stock is in limbo because of Blackwell
  • “We don’t need extra fireworks” from a “Blackwell pure play”

Astera has never been considered a “Blackwell pure play”. It seems wild to me that a deep dive on a channel devoted to semiconductors would not understand this distinction. I didn’t see any mention in the recent video of the ASIC market (non-Blackwell), which makes up over half of Astera’s sales.

Now they are posting about Astera a year later, after a massive run-up in stock price, but there is no acknowledgement that they got the call wrong previously. There is also no acknowledgement that they misunderstood Astera as a “Blackwell pure play”.

In the current video linked above they are again saying it’s overvalued and it’s cyclical. It seems like this channel is going to be waiting a long time for the supposed AI bubble to pop.

Here is their original video on Astera for reference:

29 Likes

Which is particularly troublesome, as even just two months ago they reiterated their rationale for CRDO over ALAB: that CRDO has Amazon (which mostly uses ASICs, not Blackwells) and so isn’t tied to Blackwell the way ALAB supposedly is.

But even if they were right, a small company that is heavily tied into Nvidia will probably do better, percentage-wise, than NVDA itself. That alone is a reason to own some ALAB: maybe reduce some of your NVDA bucket (which has grown nicely) and deploy some of those funds here to juice returns.

Their channel is kind of like that great Steve Martin show “Only Murders in the Building,” in that it’s self-restrictive. And being the equivalent of “Only Investing in Chip Stocks™” means that they’ve been beaten over the head with the cyclicality of chip stocks over the decades. So, they instinctively look for boom-bust cycles.

They are now pointing to previous company guidance of “only” 100% YoY and claiming “FCF margins” aren’t what they should be. Which is odd, because AI says:

As of mid-to-late 2025, Astera Labs (ALAB) reports a significantly higher free cash flow (FCF) margin than Credo Technology Group (CRDO). Astera Labs has moved to strong profitability and cash generation, while Credo is still in the earlier stages of establishing consistent positive FCF.
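For anyone unfamiliar with the metric, FCF margin is just free cash flow (operating cash flow minus capital expenditures) divided by revenue. A quick sketch with made-up numbers, not actual ALAB or CRDO figures:

```python
def fcf_margin(operating_cash_flow: float, capex: float, revenue: float) -> float:
    """Free-cash-flow margin = (operating cash flow - capex) / revenue."""
    return (operating_cash_flow - capex) / revenue

# Hypothetical quarter, purely for illustration (not real company data):
print(f"{fcf_margin(operating_cash_flow=80.0, capex=5.0, revenue=200.0):.1%}")  # 37.5%
```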

If you really want to know what’s going on with chip stocks, grab the free articles on SemiAnalysis.com.

27 Likes