Pure Storage - Additional Evaluation

As I look over my notes on Pure Storage’s earnings report and conference call, I am amazed that it was initially off 10%, and that someone was willing to sell me his beloved shares this morning at $19.89 (down almost 9%), when I added to my position. Let’s look:

Key quarterly highlights:
• Record quarterly revenue: $338 million, up 48%. Now that 48% was on top of 52% gains the year before. The last four quarters have been up 31%, 38%, 41%, and now 48%. Is that a reason to sell?
• Record full year revenue: $1023 million, up 41%
• Record GAAP operating margin: -4.7%, up 14 points from a year ago. Is that a reason to sell?
• Record Adj operating margin: 8.3%, up 10 points from a year ago. Is that a reason to sell?
• Record Adj income per share of 13 cents, up from a loss of 2 cents a year ago. Is that a reason to sell?
• Record quarterly operating cash flow of $59 million and free cash flow of $38 million, and
• Record full-year operating cash flow of $73 million and free cash flow of $7.7 million. Note that last year their FCF was minus $61 million and this year it’s plus $7.7 million. Have you seen a reason to sell?

This quarter marks important milestones for Pure, surpassing $1 billion in annual sales and achieving adj profitability. Momentum in the business is strong. Approximately 500 customers joined Pure in the quarter, increasing the total to more than 4,500 organizations.

We delivered another outstanding quarter to finish the year, growing nearly 50% over the year-ago quarter. As we look ahead to next year, we look forward to achieving our first full year of adjusted profitability.

Conference Call
We have achieved $1 billion in revenue in just 8 years since our founding, one of the fastest starts of any enterprise company in history, and we are just getting started.

For the quarter, revenue was $338 million, up 48%, and operating margin was 8.3%, both exceeding the high end of our guidance. Not only did we have an exceptional first quarter of profitability, but we also achieved positive free cash flow for the full fiscal year.

We finished our fiscal year with more than 4,500 customers, up nearly 50% from a year ago.

In this quarter we saw an increase in win rate against all of our competition.

We finished with cash of $597 million, up $46 million sequentially.

We are now in the first half of our fiscal year, a period where we focus on making investments to drive velocity in the business. This is a consistent and deliberate strategy we have been following for several years now, to invest early in the year and reap the rewards in the seasonally stronger second half. Specifically, Q1 is marked by notable ramp up in our go to market hiring and our company kick-off, while Q2 is marked by the full quarter impacts of Q1 hiring as well as our Accelerate User Conference.

The partnership with NVIDIA started in the field which is generally where the best partnerships begin. We did a bunch of global tours with them across the last couple of quarters in joint marketing activities and we got some fantastic wins with joint engagements. We are continuing to work on formalizing the partnership and working jointly with their sales and field teams.

The traction with Cisco in the field is very strong, it continues to grow. The momentum with the product is significantly outpacing the converged and integrated systems markets. And we see that continuing to grow into the future.

We expect NVMe to grow very rapidly in our portfolio, eventually extending to most of it. Part of our competitive advantage is that we were first, and we have the performance of the underlying software and architecture to allow NVMe to be most effective and fast. It's been driving a lot of our growth in customers. Frankly, we are the only player that can provide the performance necessary to address their application environments. I will just add that our approach has really been a software-centric one.

What AI use cases are you seeing beyond autonomous driving? What are some of the other AI-related use cases your product has been pulled into?
Lots of real-time analytics use cases, specifically security and threat detection internally. Think of Splunk on-prem leveraging our FlashBlade technology to really run through that. Lots of IoT-type applications as well, so there is a whole host of next-generation applications. But I think the thing that I am most excited about is the progress of the rapid restore use case.

I think that you can see with 48% year-over-year growth in the overall data storage market we are picking up market share.

You’d have to be out of your mind to sell this company down 10% today. Just my way of looking at it.

Saul

For the Knowledgebase for this board,
please go to Post #17774, 17775 and 17776.
We had to post it in three parts this time.

A link to the Knowledgebase is also at the top of the Announcements column
that is on the right side of every page on this board

54 Likes

Thanks for the great post Saul!

Can anyone explain what they mean by this…?

" But I think the thing that I am most excited about is the progress of the rapid restore use case."

What is rapid restore and how does it work?

Thanks!

Chris.

1 Like

Chris,

I am no expert in this industry, but from what Pure management said, their rapid restore capability is unique and unmatched in the industry. It comes from the FlashBlade product, which enables near-instantaneous restoration of data when data is lost or a fault takes it offline. Thus users of the data would hardly notice at all that the data in the data center had gone down.

How often have we gone to a site only to find it down? It can take days, but usually hours, of downtime to restore data in the data center. With PSTG, using FlashBlade, it occurs almost instantaneously.

This is a second use case for FlashBlade beyond just AI.

If management was exaggerating or spinning in a creative fashion, someone please let me know. I have no reason to believe so, but this is a very competitive industry with a lot of marketing hype from all the major players.

The excitement is that there is no comparable product in the market: a disruptive-looking product for a core element of backup. Nice, if as stated. My eyes opened up quickly when I heard management state this. It is data storage, and yet PURE has a disruptive product for a core data function that no one can match!

Others please weigh in.

Tinker

5 Likes

https://blog.purestorage.com/billion-dollar-backup-industry-…

5 Likes

I don’t know about Pure rapid restore specifically, but I know that VMware has a snapshot facility which does an intelligent image of the filesystem; by intelligent I mean that it only actually copies things which have changed since the last snapshot. The one problem with such systems is that they are unaware of the internal processing of a database, where an image of what is on disk is not a complete image of the current state. There are techniques for putting the DB into an idle state, so that all transactions are resolved to disk, but that, of course, interrupts operations and so is far less magic than the idea seems.
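For anyone curious how that style of snapshot works mechanically, here is a minimal, hypothetical sketch of the changed-block idea in Python. It is not VMware's mechanism or API, just an illustration of storing only the blocks that differ from the previous snapshot (the block size and function names are made up):

```python
# Toy sketch of changed-block snapshots (illustrative only, not VMware's code).
BLOCK_SIZE = 4096  # assumed block size

def split_blocks(image):
    """Map block offset -> block contents for a disk image (bytes)."""
    return {off: image[off:off + BLOCK_SIZE]
            for off in range(0, len(image), BLOCK_SIZE)}

def take_snapshot(previous_blocks, current_image):
    """Keep only the blocks that changed since the previous snapshot."""
    current = split_blocks(current_image)
    return {off: blk for off, blk in current.items()
            if previous_blocks.get(off) != blk}

def rebuild(base_image, snapshots):
    """Overlay incremental snapshots on the base image to reconstruct the
    latest state (assumes the image did not shrink between snapshots)."""
    view = split_blocks(base_image)
    for snap in snapshots:
        view.update(snap)
    return b"".join(view[off] for off in sorted(view))
```

And this is exactly the limitation described above: a block-level view knows nothing about transactions still in flight inside a database, which is why the database has to be quiesced (all transactions resolved to disk) to get a truly consistent restore point.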

1 Like

http://discussion.fool.com/i-personally-like-pstg-better-i-like-…

Some comments on Pure over on the Nutanix thread that probably would be better positioned on this thread…
Ant

First, the Pure blog that branmin links is misleading. Pure is always configured to use local compression AND deduplication. The Data Domain restore to which they were comparing had local compression turned off in order to enable better deduplication and thus a smaller backup size. I believe that if the Data Domain had been configured similarly to Pure, it would have been almost as fast.

Note that, being flash-based, Pure is more expensive than disk-based solutions like Data Domain. When you add in that the local compression actually reduces the overall compression ratio for that kind of data, the end result is that you’re spending a lot more with Pure to back up the same data. I suspect the advice to go for the smallest backup size on the Data Domain, to save money, is what resulted in the slower restore. A smart backup administrator would have understood these trade-offs. One of the comments under the post that the blog links to supports this.

I note also that Pure side-steps the $/TB factor completely. FlashBlade sounds great, but there’s a cost associated with it.

8 Likes

Smorg,

Every potential customer will be asking about $ per terabyte, so I do not believe that PSTG is ignoring that issue; it is simply an issue that is dynamic, that will change over time, and that will change depending on the alternatives. Otherwise PSTG would never have even broached the subject. It would have been plain stupid to do so.

Do you have any data to fill in for Data Domain as to restore speed, their 3 racks, or to the story using EMC as an example?

Otherwise it is all speculation. This entire industry is all speculation. I remember with ISRG when surgeons doing laparoscopic surgery were bad-mouthing ISRG's surgical robot.

The real story is in sales, and sales growth, and margins. That is how you can tell the B.S. from reality.

Sales for FlashBlade are still nascent. Expected to be $80 million for its first full year. So tough to tell, but clearly there are at least some use cases where it makes sense.

Do you have any data to dispute the claims made by PSTG in regard to restore speed issues? Clearly the backup speeds are fine. It is the restore speeds, the number of racks it takes, and the complexity involved that are the issues PSTG is raising in their blog.

Thanks.

Tinker

2 Likes

Do you have any data to dispute the claims made by PSTG in regard to restore speed issues?

As I said, it’s cherry-picking a reasonably well known mis-application of dedupe technology. Here’s a related article: https://www.brentozar.com/archive/2009/11/why-dedupe-is-a-ba… , which concludes: In a nutshell, DBAs should use SQL Server backup compression because it makes for 80-90% faster backups and restores. When faced with backing up to a dedupe appliance, back up to a plain file share instead. Save the deduped storage space for servers that really need it – especially since dedupe storage is so expensive.

It’s also important to remember that backups are done daily; restores are much less common. Dedupe storage is often considered the backup of last resort. Companies will often have simple local backups that they keep for less than a week and would be used for most restore operations. It’s when a whole facility goes down, or something needs to be pulled out of the archives for legal reasons, etc., that the replicated dedupe backup is tapped. In those cases, getting new hardware set up on which to restore is often a gating factor as well.

This entire industry is all speculation. I remember with ISRG when surgeons doing laparoscopic surgery were bad mouthing ISRG surgical robot.

Storage is not medical, and I fail to see a valid analogy here. With all technologies, you have to choose the right technology for the task at hand.

There is nothing speculative about storage. It’s not uncommon for storage vendors to let companies try a system out for 30 days free, to see if it’ll work in their application. The problem has been getting companies to change their existing processes to support the new things. Typically, being able to plug into something like Commvault or NetBackup is key, but also a limiting factor. That’s how Data Domain was able to replace tape: backup admins were still setting up things called “tapes” and such, but the backups went to a DD storage array.

Today, the world is different in that people want less specialization. If a dedupe array can look like a tape to backup software, why can’t that dedupe array just look like regular disk storage to a VM? Well, it can, but at a cost in both money and performance. Pure is pushing the boundaries on this, saying they have compression, backup/recovery, VM deployment, and other use cases all from the same easy-to-configure hardware. As I said in another thread, some of the smart Data Domain folks are at Pure now (some at other storage startups like Nimble). The question, I think, is whether Pure can find enough sweet spots that are ready to spend the additional $/TB that Pure’s solution obviously demands. If so, then there’s a possibility that they can ride the disruption curve and expand into the mainstream as their price/TB gets better and better with newer technology over time.

10 Likes

Not sure if this is a factor but most big data is stored offsite. Maybe in several locations other than the datacenter where it is being used.

The State of Oregon has offsite storage in Montana.

Bandwidth and backup and restore speeds are all factors in this equation.

As I said, it’s cherry-picking a reasonably well known mis-application of dedupe technology.

Just a bit more on this. The article I linked isn’t quite right, at least for Data Domain, which does inline deduplication, but that doesn’t change their argument that the entire data set has to be sent over the wire to the dedupe device.

The way dedupe works is logically simple: the system chunks the data it receives into blocks, typically 4 KB to 12 KB in size. Then a hash (like a SHA-1 hash) is performed on the block, and that hash is looked up in a big table. If that hash exists, then the system compares the data pointed to in the table with what’s coming in, and if it matches, then only a new pointer is stored, not the actual 4 KB-12 KB of data (since that’s a dupe of what was previously stored). Before Data Domain, what happened was that devices were storing all the data and then running a background operation to perform hashes and compares and shrink what was stored. However, if that process didn’t complete before the next backup operation started, the system could fall behind, or worse, run out of space. The inline deduplication idea was to perform the hash and compare as the data arrives, so that only the de-duped data would be stored on disk.
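As a concrete illustration, here is a minimal sketch of that inline-dedupe logic in Python. It uses fixed-size chunks and a plain dict standing in for the hash index; real appliances use variable-size chunking, verify matches rather than trusting the hash alone, and keep the index in far more clever structures. The class and variable names here are made up for the example:

```python
import hashlib

CHUNK = 8 * 1024  # assume 8 KB chunks for illustration

class InlineDedupeStore:
    """Toy model of an inline-deduplicating backup target."""

    def __init__(self):
        self.blocks = {}    # SHA-1 digest -> unique chunk bytes actually stored
        self.backups = {}   # backup name -> ordered list of digests (pointers)

    def ingest(self, name, data):
        pointers = []
        for i in range(0, len(data), CHUNK):
            chunk = data[i:i + CHUNK]
            digest = hashlib.sha1(chunk).hexdigest()
            # Inline dedupe: hash and compare as the data arrives; only write
            # the chunk if it hasn't been seen before, otherwise just record
            # a pointer to the existing copy.
            if digest not in self.blocks:
                self.blocks[digest] = chunk
            pointers.append(digest)
        self.backups[name] = pointers

    def restore(self, name):
        # A restore walks the pointer list and reassembles the original stream.
        return b"".join(self.blocks[d] for d in self.backups[name])
```

Ingest a mostly-unchanged dataset night after night and nearly everything after the first backup is stored as pointers, which is where the headline dedupe ratios come from; it also shows why a restore has to chase all of those pointers back to the unique blocks.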

The addition of standard compression before deduping is a mixed bag. One of the first things students try is to compress an already compressed file with the same scheme. They’re often surprised that the result is a bigger file, but any good compression scheme will have that property. But mixing compression schemes can help in some cases. If you are backing up users’ home directories or email, then you’ll find that many users have the same files or emails (think of long cc lists). In those cases, compressing that data first will create the same, but smaller, blocks of data to be de-duped. That’s more processing, of course, but if storage price is more of an issue than CPU, it can be the right trade-off.

If the compression is done before the bytes are sent over the wire to the backup appliance, then the backup speed might be increased since some CPUs can be faster than some networks. That’s another potential win.

However, sometimes performing compression before deduplication hurts the ability of the dedupe to work. If the same data in different places can’t be chunked up the same way, then it won’t hash the same and so it can’t be deduped. This is apparently what happens with SQL Server backups. However, that data is ripe for compression, and so the better trade-off is to locally compress, send less data, but have a lower dedupe ratio. That may look bad on paper (especially if you’re trying to justify a dedupe device), but it’s the better overall solution. The Pure blog article is literally comparing its compress-then-dedupe system to a system configured for dedupe only and saying it's better, when if they configured the other system to compress first, the results would be more similar (as one of the commenters on the original posted link did, btw).
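To make the ordering question concrete, here is a hypothetical sketch of the two configurations being compared, reusing the toy InlineDedupeStore from the earlier sketch. Which ordering wins depends entirely on the data, which is exactly the trade-off described above:

```python
import zlib

def backup_dedupe_only(store, name, raw):
    # Ship the full, uncompressed stream to the appliance and let it dedupe
    # inline: identical regions in different backups chunk and hash the same way.
    store.ingest(name, raw)

def backup_compress_then_dedupe(store, name, raw):
    # Compress locally first: less data crosses the wire and each backup is
    # smaller on its own, but the same logical data can now produce different
    # byte streams (the compressor's output depends on everything before it),
    # so the appliance's dedupe ratio across backups can drop. That is the
    # SQL Server case described above.
    store.ingest(name, zlib.compress(raw))
```

In practice you would measure both against your own data; the point is simply that the compression and dedupe knobs interact, so comparing a compress-then-dedupe system against a box configured for dedupe only is not an apples-to-apples test.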

To be clear, I’m not saying Pure isn’t a better solution for many things - it probably is. The question is whether its current advantages make it worth the extra money for a reasonable number of use cases. If so, then as its price drops (as most technology does), it’ll have a base on which to grow. But if it can’t land enough use cases now, it won’t be around when prices drop. Anyone remember Grid computers and their laptops? Too far ahead of the price-drop curve. Christensen’s famous excavator example shows how hydraulics only survived because they found a new market in backhoes that enabled them to make money while they improved their technology to eventually take on the big boys. But without backhoes, hydraulics would have failed.

9 Likes

Agree with smorgasbord…sounds like a solid product, but cost may be an issue for many clients. I did note that a Pure blog mentioned compatibility with Commvault and Veeam (popular backup software), which is good.

So many IT companies have an endpoint security play, or a hyperconverged play, or a backup play. Right or wrong, IT decision-makers tend to view certain companies as “best in breed” for specific things. HPE suffers from this a bit, as does the combined Dell EMC, as they have such a broad portfolio that it is hard for many of their lesser-known solutions to gain market share.

Pure is known for flash…may be hard for clients to think of them for backup too. I will try and ask some engineers I know for their opinion though.

It is also key as to which client contact you are selling to… the legacy IT guy may view it purely as backup for emergencies and that they may hardly ever utilize that data again…so cost is key.

If you sell to Chief Data Officer or key analysts…or CMO…you may be able to sell them on quick restore benefits as they may care more about data mining and BI. NVIDIA is trying to sell to those same client contacts as an example.

3 Likes

Dreamer, how is it, if what Pure is selling, or what Arista is selling, or what Nutanix is selling, is so passé, that they are growing revenues and customers so quickly? I mean, enterprise and cloud-titan-scale customers are hardly stupid, and they hardly like to waste money.

As Duma relayed from Denny: hard to say who is telling the truth, so follow the money.

I find it extremely difficult to believe that these companies are not creating new value in their products that the incumbents have not been able to match given how rapidly these products are growing and how much market share they are stealing.

Is there any explanation for this incongruity?

That is the entire point of the thread that Saul started. We may not understand the technology as an insider would, but we can relate the narrative to the numbers to ascertain the business truth.

Btw, I used the ISRG analogy not because it is exact, but to show a patent example of surgeons, the ultimate insiders, and how they were the largest group of skeptics of the DaVinci despite how rapidly it was being adopted. They always had some excuse, blaming it on dumb consumers, dumb hospital marketing people, etc. And yet the product just kept selling like hot cakes, even at more than $1 million per.

CDMA from QCOM was probably the most denigrated and belittled technology product in the history of all products. That is the primary reason it went up 26x in one year, once all the industry propaganda turned out to be false and the patent suits (that belied the belittling) fell away. Big money believed the insiders and the share price remained down; little outsiders saw through this and made perhaps the #1 stock score in one year in history.

Why should we ignore the numbers that PSTG is putting up?

Legitimate question.

Thanks.

Tinker

11 Likes

Why should we ignore the numbers that PSTG is putting up?

Obviously, we shouldn’t…and after removing some sod this afternoon and watching my Vols hopefully land an SEC basketball title (if SC can beat Auburn), I plan to use the numbers to date, extrapolate some future possible ranges, and decide how much back patting I should do about already having a >6% PSTG position and/or consider some 2020 PSTG options to add to shares position…although with a share price of about $21, more plain ole shares might well be a more proper answer (maybe expunging IRBT shares in the process?).

1 Like

It is also key as to which client contact you are selling to… the legacy IT guy may view it purely as backup for emergencies and that they may hardly ever utilize that data again…so cost is key.

If you sell to Chief Data Officer or key analysts…or CMO…you may be able to sell them on quick restore benefits as they may care more about data mining and BI.

It may be difficult for older people in a company to switch from the idea of big data never being used to big data being actively mined.

My Kroger card may be an example of the former: the company knows everything about years’ worth of my shopping habits, but so far seems to have made little use of it.

Cost is not always the key. I would bet most posting here chose an SSD for their last PC, not an HDD.

3 Likes

Tinker, I believe you only still have 2 horses in your stable. Are you thinking of adding PSTG to the yard or have you already done so?

Adding horses?

TBD. Only ~20% of my money is in a tax-deferred account. I can easily swap out that 20% to add or rearrange as I please. May do so. Does not help that I ended up with 1.5 hours of sleep (no exaggerating) earlier this week and ended up doing 2 days of trial against a truly distasteful individual (her attorney and my client were no prized petunias either, but in comparison, oy vey), so some of it is just getting the energy to want to do something.

I do also have plenty of long-term capital gains, but I am less likely to trade those out. And then there is monthly new money, which is of course easy to invest in whatever I want to.

But I am strongly thinking, not so much that I need to add more horses, but perhaps faster horses. One can sense where momentum is swinging. Basically where the greater gap is between expectations and actual results. And PSTG and NTNX are two examples of exactly that sense. The expectations/reality gap is widening more than, say, it once was with ANET or NVDA. At least for now.

But I have been thinking about it for a few weeks. Perhaps I will get around to it this week. Dang, takes days to recover from a week like this.

I’m gonna have a nice T-bone for early dinner, do a long dog walk and a long bike ride, and think some more about it. I feel like the tree in Lord of The Rings II, when it takes a whole day just to say good morning…oh my. Another reason why doing nothing becomes so much easier these days. Lol

Tinker

3 Likes

Pure got to $1B in revenue in 8 years.

Says a lot.

Like Tinker observed, let revenue growth and other numbers do the talking.

I didn’t say ignore the numbers. I also didn’t bring up Arista or Nutanix in my comment.

Specifically, I addressed the smaller “backup and restore” business line. The bulk of their revenues do not come from this line.

Apologies if my comments or thoughts don’t align with the echo chamber of your choice.

Just thought feedback from someone actually in the business of working with both storage companies and their commercial clients may be useful to others.

I took a small position in PSTG a week or two ago, and then again for a separate port when it was down about 7% for the day on Friday, as it seemed an odd market reaction to their ER.

3 Likes

Mauser, lack of innovation in a company comes from culture and personalities, not necessarily age.
As a very old person and the senior IT decision maker, I had the most difficulty with young people (and people in general) trained on one vendor’s product. The worst were those from IBM, and secondly those from Cisco.

You may recall that at one time IBM had probably 98% of the networking market and 0% now. It was not older people that caused this; it was myopic, bureaucratic people of all ages. Same inside the IT user community. IT people inside a company are not usually innovative people; they are usually “in the weeds” people. Companies grow and innovate when their strategic IT infrastructure investment decisions are made by people with vision (regardless of age), and they tend to diminish in stature and profitability when those decisions are left to those who are entrenched in “their vendors,” regardless of age.

As you can see I am a cranky old man with many miles of IT wars.

13 Likes