AEHR - InfiniBand, NVIDIA, & AI?

As long as we’re having fun speculating about where AEHR might be headed, can anyone shed light on the use of photonics in NVIDIA InfiniBand interconnections?

InfiniBand is an industry standard for very high-speed connections between computers. It competes with Ethernet, and it has long been one of the most widely used interconnects in supercomputers.

You may have heard the word “InfiniBand” tossed around in the course of an analysis of what’s going on at NVIDIA. Here’s how they tie together:

NVIDIA makes Graphics Processing Units (GPUs) that can perform the kind of highly parallel operations needed in Artificial Intelligence (AI). NVIDIA had the vision to see where AI might be headed, and has consequently built its architecture – hardware and software – to support the transformation of an entire data center into a single enormous AI compute cluster.

“Huh, what??!”, you say?

A single NVIDIA DGX GH200 system combines a boatload (that’s a technical term) of processors into a single compute platform for doing AI computation: 256 Grace Hopper Superchips – each pairing a Grace CPU with a Hopper GPU – all tightly interconnected over NVLink for maximum performance.

However, NVIDIA’s architecture doesn’t stop there. If you want your entire data center full of these DGX GH200 systems to operate as a single compute platform, you can tie them together with InfiniBand to multiply the size of your compute platform. Now you’ve got all your GPUs across your entire data center tightly coupled together, operating like a single compute platform for your AI.

InfiniBand is the communications mechanism that provides this interconnection between the different DGX GH200 systems in your data center. InfiniBand operates much faster than 10 Gigabit Ethernet – and when using fiber optic cable, it can connect systems up to 10 km apart (as opposed to the copper cable version of InfiniBand, which can only span about 10 meters, or roughly 33 feet).
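To put the speed difference in rough perspective, here’s a back-of-the-envelope sketch. The 400 Gb/s per-link figure (NDR-generation InfiniBand) and the 10 GB payload are illustrative assumptions on my part, and the math ignores protocol overhead and latency – it’s only meant to show the order of magnitude:

```python
# Back-of-the-envelope: time to move a 10 GB chunk of data across
# a 10 Gigabit Ethernet link vs. a modern InfiniBand link.
# The 400 Gb/s (NDR InfiniBand) rate and the 10 GB payload are
# illustrative assumptions; overhead and latency are ignored.

PAYLOAD_GB = 10                  # gigabytes to transfer
PAYLOAD_GBITS = PAYLOAD_GB * 8   # 80 gigabits

ETHERNET_GBPS = 10               # 10 Gigabit Ethernet
INFINIBAND_GBPS = 400            # assumed NDR InfiniBand, per link

t_eth = PAYLOAD_GBITS / ETHERNET_GBPS    # 8.0 seconds
t_ib = PAYLOAD_GBITS / INFINIBAND_GBPS   # 0.2 seconds

print(f"10 GbE:     {t_eth:.1f} s")
print(f"InfiniBand: {t_ib:.1f} s ({t_eth / t_ib:.0f}x faster)")
```

For a cluster that shuffles model data between systems constantly, that kind of multiple is the whole ballgame.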

So if you want to have a data center filled to the brim with NVIDIA GPUs operating as a single huge system, you’ll interconnect them with NVIDIA’s InfiniBand offering over fiber optic cable (NVIDIA bought the last independent InfiniBand vendor, Mellanox, in a deal announced in 2019 and completed in 2020).

If demand for interconnects between NVIDIA systems jumps as sharply as NVIDIA seems to expect, this could mean a new market opportunity for AEHR: wafer-level test and burn-in of the photonic devices used in those interconnects.

It’s been at least 20 years since I last heard anyone talking about InfiniBand, so could someone who is more up to date confirm what I’m saying here?

And can any of the folks who are more knowledgeable than I am on NVIDIA and AEHR chime in? Speculating is fun, but I prefer hard facts, where they’re available.



This is not an in-depth technical analysis (article posted below) but a high-level view of the issue, particularly in reference to AEHR. At the time, AEHR was guiding for $60 to $70 million in revenues and simply reiterating guidance (sound familiar?).

AEHR ended up with $64 million in revs.

Today’s guide is at least $100 million. Problem: the two stock analysts who cover AEHR publicly are estimating $150 to $160 million for this fiscal year. Meaning, at the low end, AEHR NEEDS $126 million or so over the next 3 quarters to hit this expectation. That is a $42 million average for each of the next three quarters. AEHR gave absolutely no reason at all to expect anything close to the analyst expectations, which I’m sure will be coming down, and that is perhaps the real reason the market did not like earnings.
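To make that arithmetic concrete, here’s a quick sanity check using only the figures in this post (the $150M low end and the $126M remaining are the poster’s numbers, not AEHR’s):

```python
# Sanity-check the back-of-the-envelope revenue math above.
# All figures come from this discussion, not from AEHR filings.

analyst_low = 150.0       # $M, low end of the analyst estimates
remaining_needed = 126.0  # $M still needed over the next 3 quarters
quarters_left = 3

implied_q1 = analyst_low - remaining_needed        # revenue already booked
avg_per_quarter = remaining_needed / quarters_left

print(f"Implied Q1 revenue: ${implied_q1:.0f}M")
print(f"Average needed per remaining quarter: ${avg_per_quarter:.0f}M")
```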

For me, it was this, plus discovering that customer concentration isn’t really going to improve this fiscal year. Maybe next year, but not this year.


Even though, imho, AEHR and the minutiae of its various potential avenues of future growth have been flogged to death (and then some, and then some more), I have to chime in on this one because the statement above is just incorrect.

For this fiscal year, the two analysts publicly publishing estimates gave $100.93 million and $105.33 million, for an average of $103.13 million vs. the guide of $100 million. So pretty much in line with guidance.

The $150-$160m is for NEXT fiscal year.



Correction. My bad. My revenue numbers are actually for next fiscal year (2025). This year is fiscal year 2024. Analyst estimates are at $100 million for this year and $150 to $160 million for next year.

Also, the $64 million may be for the calendar year and not the fiscal year, as AEHR came in at $74 million at the end of the fiscal year.

So, put together, AEHR will likely marginally surpass the high end of guidance, which should be raised by AEHR’s Q3 if the same pattern holds.

Also, if analyst estimates remain as is, I think we’d all be very happy with AEHR meeting the fiscal year 2025 estimates of $150 to $160 million. The share price will certainly increase materially if that comes to pass, and AEHR can then guide even higher for fiscal 2026.

Apologies for the original error. The numbers were taken from Yahoo, and I sometimes misread them if I go too fast.

This is just two analysts, mind you, and both are predicting longer-term success here, which seems in the cards as long as ON keeps ordering at least at its current levels while these other customers start buying.

Also, from the article in the previous post, the author did mention $5 million in silicon photonics revenue recognized by AEHR. The May 2023 sale to a photonics customer probably at least equaled all the prior photonics sales combined. But that was just one sale.


Tinker, I was wrong earlier: $65 million total revenue for the last fiscal year ending May 2023 is correct, not $74 million. And $100M over $65M implies roughly 54% growth, so the full-year guidance made sense.
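For anyone checking the growth math, it’s just the guide divided by last year’s revenue (both figures taken from this thread):

```python
# Growth implied by the $100M guide over last fiscal year's revenue.
# Figures are from this discussion (FY ending May 2023 = $65M).

last_fy_rev = 65.0   # $M
guide = 100.0        # $M, current full-year guidance

growth = guide / last_fy_rev - 1   # ~0.538, i.e. roughly 54% growth
print(f"Implied growth: {growth:.0%}")
```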



Glad I got one of the numbers right :stuck_out_tongue_winking_eye:!


I’m knowledgeable on neither NVIDIA nor AEHR, but Tesla created Dojo to replace NVIDIA, which was not fast enough for them.

Denny Schlesinger


It’s a bit less simple than that. IIRC, back when Tesla had just started the D1 chip design effort, Tesla was thinking that it could create a chip designed for its specific image-processing/machine-learning operations. Nvidia has by necessity designed chips that are suitable for a wide range of AI operations, and Tesla at the time felt that since it knew its workflows, it could design something more optimal for those specific workflows.

That attitude has changed recently, however. There are a few articles on this, here’s one from the Fool itself:

Which quotes Musk:

“We’re using a lot of Nvidia hardware. We’ll continue to use – we’ll actually take Nvidia hardware as fast as Nvidia will deliver it to us. Tremendous respect for Jensen and Nvidia. They’ve done an incredible job. And frankly, I don’t know, if they could deliver us enough GPUs, we might not need Dojo. But they can’t. They’ve got so many customers. They’ve been kind enough to, nonetheless, prioritize some of our GPU orders.”

I wonder if now that Dojo chips are in production Tesla is finding that they’re not much faster than the latest Nvidia has to offer. And Nvidia has even faster chips coming out early next year, so Dojo/D1 might even be eclipsed then.


I read the quote carefully. It does not say that Tesla will replace Dojo chips with GPUs. It says that Tesla will use NVIDIA GPUs in addition to the Dojo chips, without specifying in which applications.

It’s the same story as with batteries and soon with lithium: Tesla takes all it can get to support its insane growth rate.

Denny Schlesinger


You said Nvidia chips were not fast enough for Tesla. Musk has directly said that’s not the case, that he would use them if he could get enough of them, and “might not need Dojo.”

In the ER call where he made that statement, he also said, as I noted, that Dojo was “optimized for video training.” But his view now – that if Nvidia could deliver enough GPUs they might not need Dojo – strongly indicates the performance advantage they’re seeing isn’t that great, at least for the dollars spent (over $1B through the end of next year).