SMCI: Supermicro

A followup from my reply in this thread: SentinelOne - New article by Bert Hochfeld - #6 by jonwayne235

For those who hadn’t heard of SMCI (like myself until days ago), Supermicro provides servers and storage systems for markets such as data centers, cloud computing, AI, and edge computing. This is a hardware business with commodity-like economics.

They did $6.57B in TTM revenue and $10.62 in EPS, and currently trade at a P/E of 15, with a market cap of about $9B.
They have solid guidance of at least 20% growth next year (fiscal 2023 ends next month), per their latest earnings:

…continue to expect our fiscal year 2024 revenue to be at least 20% year-over-year growth, and we are accelerating to reach our mid-to-long-term growth objectives of $20 billion per year….
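As a quick sanity check on those figures (my own back-of-envelope sketch in Python, using the ~56 million share count mentioned later in this thread), the market cap, share count, and EPS are consistent with a P/E of about 15:

```python
# Back-of-envelope check of the valuation figures quoted above.
# Inputs are the approximate numbers from this thread, not exact filings data.
market_cap = 9.0e9   # ~$9B market cap
shares_out = 56e6    # ~56 million shares outstanding (cited later in the thread)
ttm_eps = 10.62      # trailing-twelve-month EPS

implied_price = market_cap / shares_out
implied_pe = implied_price / ttm_eps
print(f"implied price ≈ ${implied_price:.0f}, implied P/E ≈ {implied_pe:.1f}")
# prints: implied price ≈ $161, implied P/E ≈ 15.1
```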

Supermicro competes with Dell, IBM, and Hewlett Packard Enterprise.
But they are also deeply embedded with the big chip makers and have many big customers (Meta has long been speculated to be one of them).
From their latest earnings call, May 2:

…record pace of GPU leading-edge design wins with growing back order, including winning at least 2 new global top 20 customers

…key partners including NVIDIA, Intel, AMD…"

In fact, even AMD’s CEO specifically mentioned Supermicro in their last earnings call among the other big name competitors: “For the enterprise, Dell, HPE, Lenovo, Super Micro and other leading providers entered production on new general server platforms that complement their existing third-gen EPYC platforms.”
(I’m just emphasizing that Supermicro is a real player in the industry, NOT a puny company just starting out: SMCI was founded in 1993.)

SMCI attempts to differentiate itself by focusing on high-end, high-performance, high-efficiency servers. In the current AI hype environment, this is turning out to be a key advantage. Power-efficient, money-saving products, top-of-the-line server hardware, and fast time to market have allowed SMCI to gain share. Customers today want their servers delivered quickly.

From their latest earnings call:

With applications like ChatGPT that heavily count on large language models, LLM and generative AI, the state of AI infrastructure business has grown rapidly. This AI momentum has benefited Super Micro greatly as we are deploying many of the world’s leading and large-scale GPU clusters….AI, Storage, on-prem Cloud, Embedded and 5G Edge are all verticals we see the potential to greatly increase our TAM.

Having high power efficiency and air/liquid thermal expertise has become one of our key differentiators of success.
Provided there are no supply constraints, we can design, build, validate full systems and deliver turn-key rack level solutions to customers within a few weeks of placing an order instead of months from competitors…we are on track to support up to 4,000 racks per month of global manufacturing capability and capacity by the calendar year end. We are doing so by efficiently taking market share in the new and fastest growing markets.

Indeed, AI has fundamentally driven their business in recent months: servers made specifically for AI increased to 29% of total revenues, up from 20% in the prior quarter.

Q3 results were driven by our high-growth AI/GPU and rack-scale solutions which represented approximately 29% of our total revenues and we expect significant, future growth. An existing cloud service provider customer represented more than 10% of revenues for the first time.

Some other very positive tidbits:
1.) SMCI insider buying occurred on May 10 and May 11, 2023. Very interesting that they purchased at $133/share each time…and that was done AFTER the share price exploded on May 2’s report (from $104 to $133.95 in a single day!!!)

2.) SMCI also bought back their own shares in the past quarter, indicating they felt the shares were VERY undervalued at the ~$100 price: Q3 stock repurchases of $150 million (1.55 million shares).
Please note that this is pretty significant. Insiders hold 13% of outstanding shares, there are currently only 56 million shares outstanding, and only about 40 million shares in the float, so this stock is prone to volatile upward squeezes if investors and institutions want to accumulate and hold. That has been clearly demonstrated by its rocket-ship price increases over the last several days.

Even an analyst pointed out the buyback during the May 2 call!

Very impressive buyback rate of $150 million in a quarter, $100 a share. So a very strong statement that the shares are attractive prices. And nice to see that backed up with the $10.5 to $11 share fiscal year '23 guidance.

3.) My opinion is this company remains severely undervalued relative to its peers. SMCI is forecasting 30% growth (for the quarter ending June 2023) and continues to reiterate at least 20% growth in fiscal 2024 and beyond. Their “mid to long term” target is $20B in revenue.

SMCI has a P/E of 15.

HPE (Hewlett Packard Enterprise) has a P/E of 22 and grew 12% YoY last quarter.

DELL has a P/E of 14, but is forecast to grow only 1.5% a year over the next 3 years.

If you ask me…I wouldn’t be surprised to see SMCI reprice to a PE of 22 or higher before their next earnings release.
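Purely as an illustration (my own sketch, assuming trailing EPS holds flat at $10.62; not a forecast), here is what a rerating to HPE’s multiple would imply for the share price:

```python
# Hypothetical rerating scenario: implied share price at different P/E
# multiples, holding TTM EPS constant. Illustrative only, not a prediction.
ttm_eps = 10.62
implied = {pe: round(pe * ttm_eps) for pe in (15, 22)}
for pe, price in implied.items():
    print(f"P/E {pe} -> implied price ${price}")
# prints: P/E 15 -> implied price $159
#         P/E 22 -> implied price $234
```

So a rerating from 15x to HPE’s 22x on flat earnings would imply roughly $234 per share versus about $159 today.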

The biggest NEGATIVES for this company:
It should be emphasized again that this is a nonrecurring-revenue hardware business. They can encounter supply chain problems, which they did in the latest quarter, though they claim those issues have been largely resolved, and they were optimistic in their Q4 guidance:

Fiscal Q3, 2023 revenues were $1.28 billion, down 5% year-over-year and down 29% quarter-over-quarter which was below our initial guidance range of $1.42 billion to $1.52 billion. The shortfall was primarily due to key new component shortages for Super Micro’s new generation server platforms which have been mostly resolved to date.
….delivering Q4 revenues in the range of $1.7 billion to $1.9 billion. If supply conditions improve sooner, we expect to be above that range, even some economical headwind is still ahead.

But it is important to note from the Q&A that demand is definitely NOT the problem, even in this macro slowdown:

There was a shift – there was a dramatic shift toward new AI solutions… And so therefore, it was larger than anyone expected. And so the parts availability constrained the amount of shipments that we could do. Obviously, we anticipated a slower quarter because the third quarter is seasonally slower. And we also mentioned – you’re correct, we also mentioned some customers that tapped their brakes and moved out to Q. But it was really the component shortages that hit us this quarter.

And their gross margins are not high:

The Q3 non-GAAP gross margin was 17.7%, this was down 110 basis points quarter-over-quarter and up 210 basis points year-over-year. The decline in the non-GAAP gross margin was due to our efforts to gain market share in the rapidly growing AI server platform market with aggressive pricing targeting strategic large enterprises, data center and CSP customers.

78 Likes

Came across a tweet featuring analyst coverage (released today) that included SMCI as a top pick, with a price target of $400: https://pbs.twimg.com/media/FwqzgfRagAAOzIx?format=png&name=large

And a product press release from today, May 22: Super Micro Computer, Inc. - Supermicro Launches Industry’s First NVIDIA HGX H100 8 and 4-GPU H100 Servers with Liquid Cooling – Reduces Data Center Power Costs by Up to 40%

25 Likes

I know SMCI is also cheap and very profitable (looking at GAAP profit per employee), but I have two short-term issues in addition to the inherent limitations of a business like this:

1/ Has appreciated too much, too fast
2/ Too much exposure to retail hype.

Just over the last two weeks, it was a free recommendation from two services, after a bunch of folks had talked about it for weeks.

EDIT: this is not about market timing but about potentially getting sucked into a hype train. After 2020-21, I am very wary of anything that is all over social media as a “great opportunity,” and the smaller the cap, the more so.

22 Likes

I agree with you 100%.

As I explicitly said previously:

It seems like there is plenty of upside…especially if the AI bubble is only just starting to inflate.

Yes- one reason I entered SMCI is to specifically try to ride the AI bubble hype narrative.
Nothing wrong with that.

But you can do that with trash companies too, like the ticker $AI, which appreciated even more than SMCI over the last 7 trading days (40% for $AI vs. 20% for SMCI).
Except I wouldn’t be comfortable holding $AI, as its underlying business doesn’t support the move. It could easily collapse in the next week once the short squeeze ends.

SMCI is also a volatile stock, but the key difference is that the business strength is actually there (profitable, with growth), and I believe it still appears cheap despite its parabolic advance.

11 Likes

Actually, I disagree on AI as a product, though the company is indeed suspect. On paper, it’s the ultimate AI stock. Their enterprise-level AI is used by the USAF, the French and Italian electric utilities, big oil-industry players, and others.

It is supposed to be a true, deep-learning, predictive AI.

For example, instead of changing your car’s oil at fixed intervals of every 3, 5, or 10,000 miles, it tells you when to do so. Aircraft parts and labor are of course extremely expensive, and downtime diminishes combat readiness. Oil-industry machinery and components are so numerous that any efficiencies pile up. For electric companies, it allows you to better map where power is being stolen, for example more so in Sicily than in Venezia-Friuli. And so on.

In addition, like IoT, it is not heavily dependent on tech companies’ spending.

But the reason it got swept up in the current hype, IMO, is their launch of a different kind of AI product for enterprise search, which correlates with the current ChatGPT-inspired mania.

The problem I had, and the reason I sold back in spring 2021 or thereabouts, was the lack of progress. Their deals are massive, but if their AI is so amazing, why did growth collapse once they went public? Siebel spins a great story, but the numbers have not been there.

But I would not call AI a “junk” company. For me it is similar to NET but on steroids: massive disconnect between narrative and numbers.

I re-started following it recently and will listen to the call with interest, though it is not on my list of possible buys. I had a starter position that I rode for a strong quick gain, but I don’t know how to deal with the hype, so I am sitting out. I am also not a fan of the hype-friendly direction.

I would much rather see big contracts with new big customers. So watching from safe distance :slight_smile:

Good luck riding SMCI!

11 Likes

Super Micro Computer, Inc. (SMCI), like its name says, is a hardware company. It’s competing with other hardware companies. The talk about AI is important but incidental: AI will be a huge driver of hardware, servers, and storage. One fact that probably most people miss is the importance of energy consumption. Two decades or more ago there was a huge discussion of the need to mine more coal to power server farms. Companies like Apple and Google have solar farms to power their server farms. A main driver of ARM Holdings’ success was the low power consumption that made ARM chips best for mobile devices such as laptops, tablets, smartphones, and the IoT devices that took over edge computing.

To make server farms effective (fast) you want everything to be as close as possible and servers generate a lot of heat. That means that server farms need lots of cooling and robust sources of power.

Reducing power costs by 40% is huge. Reducing power consumption at the server level cascades into reductions in many ancillary facilities like cooling and power supply, and in this age of ‘Climate Change’ it also reduces CO2 emissions, which is the politically correct thing to do. Disruptors tend to have the advantage once they have been accepted by the market.

I would concentrate on SMCI hardware sales as the primary driver of SMCI, the stock. While the stock price has grown tremendously recently, the market is finally emerging from the 2022 crash, which attenuates the fast-growth concern.

Denny Schlesinger

34 Likes

NVDA’s eye-opening numbers today:

NVDA reported $7.193 BILLION in Q1 revenue, driven largely by the data center segment on AI demand, and guided for at least $11.00 BILLION in Q2 revenue.

They are guiding for 53% sequential, QoQ growth. FIFTY-THREE PERCENT.
Keep in mind that for the data center category specifically, they are guiding for essentially DOUBLING, yes, 100% sequential growth, QoQ. [Yeah, even an analyst on NVDA’s call spoke up and highlighted this as doubling!]

That is off-the-charts NUTS!!!
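The sequential math is easy to check (a quick sketch of my own, using the reported and guided figures above):

```python
# Verifying the guided QoQ growth rate from NVDA's reported Q1 revenue
# and Q2 guidance (figures as quoted in this thread).
q1_revenue = 7.193e9   # reported Q1 revenue
q2_guide = 11.0e9      # guided Q2 revenue ("at least")
qoq_growth = q2_guide / q1_revenue - 1
print(f"guided QoQ growth ≈ {qoq_growth:.0%}")
# prints: guided QoQ growth ≈ 53%
```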

Congratulations to those holding NVDA!

Now turning to SMCI, I can only see this report as very positive tailwinds for supermicro.

NVDA earnings call had the following said:

Generative AI is driving exponential growth in compute requirements…Generative AI drove significant upside in demand for our products…when we talk about our sequential growth that were expected between Q1 and Q2, our generative AI large language models are driving this surge in demand, and it’s broad-based across both our consumer Internet companies, our CSPs, our enterprises and our AI start-ups.

Hammering the point home that AI demand is doing the heavy lifting in growth today

…we have visibility right now for our data center demand that has probably extended out a few quarters

Wow: NVDA claims they have clear visibility into this explosive growth continuing for at least several more quarters.

…what happened is when generative AI came along, it triggered a killer app for this computing platform that’s been in preparation for some time…The world’s $1 trillion data center is nearly populated entirely by CPUs today. And at $1 trillion, it’s growing of course, $250 billion a year. But over the last 4 years, call it $1 trillion worth of infrastructure installed, and it’s all completely based on CPUs and dumb NICs. It’s basically unaccelerated.
In the future, it’s fairly clear now with this – with generative AI becoming the primary workload of most of the world’s data centers generating information…the fact that accelerated computing is so energy efficient, that the budget of a data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak, while the world’s data center CapEx budget is limited. But at the same time, we’re seeing incredible orders to retool the world’s data centers.
And so I think you’re starting – you’re seeing the beginning of, call it, a 10-year transition to basically recycle or reclaim the world’s data centers and build it out as accelerated computing. You have a pretty dramatic shift in the spend of a data center from traditional computing and to accelerated computing with…GPUs and the workload is going to be predominantly generative AI.

Basically, there is a ONE TRILLION dollar pie, already growing yearly at a pedal-to-the-metal pace, that is going to be re-sliced and re-bought as GPU hardware.
Guess who also benefits? Supermicro, which packages top-of-the-line NVIDIA, AMD, and Intel GPU products into power-efficient, money-saving servers for enterprise customers who all want to retool their data centers for AI solutions and accelerated computing.

Remember, SMCI saw last quarter that AI solutions increased from 20% to 29% of total revenue, QUARTER over quarter.
That is a huge deal and clear sign they are also beneficiaries of this massive AI demand tailwind.

Let’s keep in mind the timing context as well. SMCI’s last reported quarter ended March 31. AI grew from 20% to 29% of total revenue.

Now, NVDA’s last reported quarter ended April 30. Their data center category grew only 19% sequentially, but they are guiding for roughly 100% sequential growth for the quarter ending July 31.

So this can only mean that SMCI’s AI revenue next quarter will be way bigger than their sandbagged guidance implies.

I would not be shocked if they report Q4 revenues above their high-end guide of $1.9B. Perhaps they will come in at $2.0B and deliver a spectacular Q1 outlook.

And SMCI has pushed to a +12.41% gain in after-hours trading versus yesterday’s close, probably due to the correlated benefit from NVDA’s report (NVDA itself gained +24%).
I still feel that even at its after-hours all-time-high price, SMCI remains ridiculously undervalued.

31 Likes

Jensen Huang said some things in the CC that are real eye-openers to me.

He really emphasized/highlighted their networking products (InfiniBand), and even more than that he emphasized that $NVDA is not a chip company, it is a data-center-infrastructure company:

"So, nearly everybody who thinks about AI, they think about that chip, the accelerator chip and in fact, it misses the whole point nearly completely.

And I’ve mentioned before that accelerated computing is about the stack, about the software and networking. Remember, we announced very early on this networking stack called DOCA, and we have the acceleration library called Magnum IO. These two pieces of software are some of the crown jewels of our company. Nobody ever talks about it, because it’s hard to understand, but it makes it possible for us to connect tens of thousands of GPUs."

“How do you connect tens of thousands of GPUs if the operating system of the data center, which is the infrastructure, is not insanely great? And so that’s the reason why we’re so obsessed about networking in the company. And one of the great things that we have – we have Mellanox, as you know quite well, which was the world’s highest-performance and the unambiguous leader in high-performance networking; that’s the reason why our two companies are together.”

…and so – this whole area of the computing fabric extending connecting all of these GPUs and computing units together, all the way through the networking, through the switches, the software stack is insanely complicated.

He repeatedly made the point that delivering “generative AI” and “accelerated computing” at scale requires an incredible scope of components working together: software, compute, networking etc. He emphasized that $NVDA has been working for 15 years to create a datacenter architecture that gets all that orchestration right, and they had it ready just-in-time for the arrival of the ultimate “killer app”: AI.

And he emphasized that over the next 10 years it’s not just GPUs getting deployed, it’s entire datacenters getting completely retooled to go from current/legacy CPU-based compute to GPU-based.

He closed with this:

“We are ramping a wave of products in the coming quarters, including H100, our Grace and Grace Hopper super chips and our BlueField-3 and Spectrum 4 networking platform.”

22 Likes

Any red flags with the comments on the NVDA call about accelerated computing being way more energy efficient?

Jensen Huang
Now, let me talk about the bigger picture and why the entire world’s data centers are moving towards accelerated computing. It’s been known for some time, and you’ve heard me talk about it, that accelerated computing is a full-stack problem, a full-stack challenge, and doing it successfully in a large number of application domains has taken us 15 years.

If sufficiently many of a data center’s major applications could be accelerated, you could reduce the amount of energy consumed and the cost of the data center substantially, by up to an order of magnitude.

In the future, it’s fairly clear now with this – with generative AI becoming the primary workload of most of the world’s data centers generating information, it is very clear now that – and the fact that accelerated computing is so energy efficient, that the budget of the data center will shift very dramatically towards accelerated computing, and you’re seeing that now. We’re going through that moment right now as we speak. While the world’s data center CapEx budget is limited, at the same time we’re seeing incredible orders to retool the world’s data centers.

I don’t fully understand this, but with the accelerated computing he mentions, will SMCI be able to piggyback and make it more efficient, or could this reduce the need for SMCI’s products?

4 Likes

Part of what you quoted in your post is this:

Does that sound like SMCI sales are threatened? Answer: No, it sounds like they’ll be seeing “incredible orders”.

Disclosure: My former 60% SMCI allocation popped to around 70% today. Yeah, indiscreet, etc…

Rob
He is no fool who gives what he cannot keep to gain what he cannot lose.

8 Likes

Super Micro uses NVIDIA chips

In close partnership with NVIDIA, Supermicro delivers the broadest selection of NVIDIA-Certified systems providing the most performance and efficiency from small enterprises to massive, unified AI training clusters with the new NVIDIA H100 Tensor Core GPUs.

Denny Schlesinger

14 Likes

I think the NVDA improvements in efficiency are separate from what SMCI is doing. If anything, they should complement each other. My understanding is that SMCI’s 40% reduction in power requirements comes primarily from liquid cooling, which is much more efficient than air cooling. A large part of the electricity used by data centers goes to HVAC to keep the equipment cool. The liquid cooling is built into the equipment SMCI makes and is independent of the processors used, whether they are from NVDA, AMD, Intel, or someone else.

17 Likes

And here is further evidence that datacenter architecture is on the cusp of a massive overhaul, driven by AI. This time from Marvell’s CC this past March:

We expect generative AI to drive a massive transformation in data center architecture. We see a bigger opportunity for cloud-optimized silicon for custom compute, a trend we have extensively discussed over the last couple of years. In addition to compute, the level of scaling in these generative models requires significant innovation and technology leadership in networking infrastructure to interconnect AI supercomputers. This requires ultra-high-bandwidth links with low latency and sufficient reach; minimizing the energy expended to move the massive amounts of data in these platforms is another important criterion.

We believe these requirements are best met by high-speed optical connections. Last year, we launched the industry’s first 800 gig PAM DSP and saw a huge ramp driven almost exclusively by AI applications. Our PAM DSP revenue from AI in fiscal 2023, more than quadrupled from the prior year. As AI models continue to grow in complexity, we expect that they will require more and more low-latency bandwidth.

Earlier today, we announced the industry’s first 1.6 terabit per second PAM platform, enabling a further doubling of bandwidth within the AI cluster. As investment in AI accelerates, we see this as a new growth engine for our electro-optics portfolio.

Things like density (of both storage and compute), low-power/easy-cooling designs, and (especially) low-latency network connectivity within the datacenter are at the top of the list.

9 Likes

Thanks, that makes sense, just starting to learn about SMCI :slightly_smiling_face:

6 Likes

It is a differentiator for SMCI, as it is typically ahead of its competitors on the cooling/efficiency front.

9 Likes

I wonder, with this AI data center architecture retooling, what spillover potential there is for Pure Storage, which has (or had) a joint AI architecture stack solution as well as an “Elite” partnership with Nvidia.

It appears Super Micro is also a potential competitor to Pure on the storage front…

15 Likes

Without wanting to hijack the Supermicro subject (and I can’t help but think it is bad form to reply to one’s own posts), the added dimension of interest with Pure Storage is that:

It has finally reached a lower TCO for its latest all-flash/advanced-flash storage vs. disk drives (still 80% of the data center) and has released FlashBlade//E specifically to attack the large unstructured-data storage market, also aimed at the AI revolution.

Pure has secured lumpy revenues from Meta, which it does not forecast. If Meta (or other hyperscale DC operators) engages in massive amounts of AI-driven retooling, then Pure could be in line for considerable business, and the numbers vs. current guidance could benefit significantly.

From the Q4 conference call…

Charlie Giancarlo

"…Pure, for the first time, had a presence at the World Economic Forum in Davos this past January. The largest topics at Davos this year were the war over Ukraine, digital currencies and sustainability. Our participation focused on promoting the important role that Pure will play in reducing technologies and IT’s demand for energy and its production of e-waste.

Just prior to the event, the World Economic Forum produced a study stating that digital electronics of all types contribute 4% to 5% of all carbon emissions. Other studies identify that data centers use between 1% and 2% of all electrical power generated in the world. It is further estimated that data storage accounts for 20% to 25% of data center power usage, increasing to as much as 40% by the end of the decade. The vast majority of data center data, over 80%, remains trapped on magnetic hard disks. As we have stated for the past year, Pure’s flash-optimized systems generally use between 2x and 5x less power than competitive SSD-based systems and between 5x and 10x less power than the hard disk systems we replace.

Simple math then shows that replacing that 80% of hard disk storage and data centers with Pure’s flash-based storage can reduce total data center power utilization by approximately 20%. That same math shows that both data center space and e-waste would also be reduced by similar amounts with reduced labor costs and increased reliability as additional benefits. Reducing the world’s data center power, space and e-waste by 20% is a very significant reduction in the world of sustainability and needs to be recognized and amplified. This opportunity is resonating not only within the highly specialized field of IT data storage teams, but now also with the entire C-suite, including CIOs, CFOs and even CEOs.

While our product and technology leadership remains the primary reason by which customers select Pure, the competitive sustainability of our products continues to grow in importance with customers. In Q4, we saw more customers citing energy efficiency as a reason they chose Pure than in any previous quarter-to-date. Beyond just the environmental benefits we provide, customers are increasingly compelled by their ability to get more out of their storage at a lower total cost of ownership, given the backdrop of increasing energy prices.

This simple step of replacing hard disk with Pure’s flash-optimized storage has significant benefits to any organization but has been out of reach economically for the majority of secondary tier data because of the higher cost of solid-state flash technology compared to the lowest cost hard disk drives. Large, unstructured data repositories continue to be dominated by 7,200 RPM disks, despite their difficulty to manage, relatively low reliability and their substantial power, space and cooling needs, because superior all-flash systems were too expensive.

Well, I’m pleased to announce that our founders’ vision of the all-flash data center is finally here. And the days of hard disk dominance of data are coming to a close. Today, Pure announced FlashBlade//E, a scale-out, unstructured data repository built for large capacity data stores, which provides a lower total operating costs compared to secondary tier disk. FlashBlade//E will ship late this quarter.

FlashBlade//E will be priced under $0.20 per gigabyte at a system level and costs even less when measured on effective capacity. Let me repeat that. FlashBlade//E will be priced under $0.20 per gigabyte at a system level, inclusive of the first three years of subscription, which directly competes with lower performance all hard disk systems. And operating costs for FlashBlade//E are significantly lower than the hard disk systems that it will replace, with its 5x to 10x reduction in cost for power, space, cooling and labor.

FlashBlade//E, the second in our series of cost-optimized products after FlashArray//C, opens up a massive new opportunity for us and allows us to expand further into our total available market. FlashBlade//E enables Pure to significantly penetrate many segments of the storage market currently dominated by disk, which has been inaccessible to Pure until now. FlashBlade//E is a perfect example of how we are investing our R&D dollars in a focused and strategic manner to maximize long-term growth opportunities in one of the world’s largest IT categories.

We are extremely excited about how this new product complements our innovative portfolio and strengthens our opportunity to drive Pure’s growth over the long-term…"

Kevan Krysler

“…We are very excited about the innovations we delivered in the year and are particularly excited about the introduction of FlashBlade//E, which will further fuel our ability to make the all-flash data center a reality, a benefit for both our customers and our planet.”

“…During Q4, we did not receive new product orders for Meta and this is reflected in our forecasted growth rate for next year. Also, our FY2024 revenue guidance assumes a modest ramp during the second half of the year from sales of our newest FlashBlade//E offering.”

" Operator

Thank you. Our next question comes from Simon Leopold from Raymond James. Your line is now open. Please go ahead.

Simon Leopold

Thanks for taking the question. I wanted to see if maybe you could address a longer-term opportunity around artificial intelligence and machine learning. I think that was sort of the root of the use case for Meta. And wondering, particularly with the introduction of the E platform for unstructured data, how we should think about that particular use case broadly affecting Pure Storage over the longer term? Thank you.

Rob Lee

Yes, Simon, this is Rob. I’ll take that one. So yes, I mean, certainly, as we’ve talked about in prior calls, the broader space of analytics and AI continues to be a strong one for us, certainly with Meta, but also the broader customer base. And then certainly, within the last several months, a lot of news in that space around new developments, generative AI technology, so on and so forth. Look, at the end of the day, we very much believe that this entire space of technology is extremely dependent on data, on very large corpuses of data. And if you step back and you look at where our customers are largely housing those sets of data today all together, way too often, it’s sitting in bulk repositories trapped on very, very inefficient pools of disk.

And this is precisely the opportunity that we developed FlashBlade//E to go and attack. And so I think we feel, long term, that the bulk data space overall, and certainly the focus of AI on capitalizing on those sets of data, presents a very significant opportunity for us. And we’re going to go pursue that very aggressively, as we’re now really the only ones that are going to be able to take flash technology and go and modernize those environments.

Charlie Giancarlo

Yes, and Simon, I might add on to that. Listeners may recall that the Meta architecture was FlashBlade for the high-performance side, which was approximately 10% of the bytes stored, and FlashArray//C for the other 90% of the bytes stored, which are, let’s say, in hot standby – with FlashBlade providing the high-performance side and FlashArray//C providing the warm storage for data that’s about to be processed.

Well, what we see with FlashBlade//E is the opportunity to expand that even further. So it’s a great architecture: we provide both performance and a lower cost for the warm tier. And so we can get to an all-flash environment, as well as an all-Pure environment, in these customers, rather than having to have a high-performance tier that’s flash and a lower-performance, lower-cost tier that’s hard disk."

20 Likes

Join us at COMPUTEX for Supermicro CEO, Charles Liang’s keynote on advancing AI through green computing and Total IT Solutions. Jensen Huang, NVIDIA CEO, will join Charles to showcase Supermicro and NVIDIA’s latest innovations in data center infrastructure, accelerated computing, and more. Register now to attend in person at COMPUTEX Taipei or watch live online on Thursday, 6/1, or on-demand after the event. #Supermicro #NVIDIA #Computex #ComputexTaipei #AI #TotalITSolution #Innovate #Datacenter #GreenComputing

From SMCI’s linkedin page.
Perhaps there will be a nice boost in visibility for their products alongside NVDA.

12 Likes

I don’t imagine the pairing of these two CEOs/companies will hurt the share price momentum either.

Hopefully, I can watch this on YouTube afterward. I can’t register for the livestream through Supermicro because they demand a “work email” and won’t accept a gmail address. :frowning: Didn’t follow through with the Exhibition website.

Rob
He is no fool who gives what he cannot keep to gain what he cannot lose.

10 Likes

It is nice to see SMCI mentioned in a Wall Street Journal article.
The incredibly high demand for AI GPU/server hardware is real.

Server manufacturers and their direct customers say they are facing waits of more than six months to get Nvidia’s latest graphics chips. The CEO of Supermicro, one of the largest server-makers, said the company’s back-orders of systems featuring graphic chips was at its highest level ever and that the company was rushing to add manufacturing capacity.

Other interesting quotes from the article:

“GPUs at this point are considerably harder to get than drugs,” Elon Musk told The Wall Street Journal CEO Council Summit on May 23.
Being Musk has its perks, though. Earlier this year, startups clamoring for Oracle computing capacity were abruptly told that a buyer had snapped up much of Oracle’s spare server space, people familiar with the matter said. The buyer, the startups were told, was Musk, who is building his own OpenAI rival called X.AI, the people said.

It seems that as AI models get more complex, ever more GPU power is needed.

UBS analysts estimate an earlier version of ChatGPT required about 10,000 graphic chips. Musk estimates that an updated version requires three to five times as many of Nvidia’s advanced processors.

People bulk-buying their own GPU servers directly would be a positive for SMCI.

Some investors are combing their networks for spare computing power while others are orchestrating bulk orders of processors and server capacity that can be shared across their AI startups. Startups are shrinking their AI models to make them more efficient, buying their own physical servers with relevant graphics chips or switching to less-popular cloud providers such as Oracle until the shortage is resolved, according to AI investors and startups.
Other founders are simply begging salespeople at Amazon and Microsoft for more power.

24 Likes