Snowflake - My Thoughts

I wrote in post #84922 of this thread (…) why I sold Snowflake back in Feb at a modest loss. My key concern was that investors are ignoring the risks of a steep revenue decline. The ER released yesterday confirmed my concerns.

The red flags to me were as follows:

  1. Headwinds from the so-called pricing optimization / product “enhancements”

  2. The NRR of 170+ is unsustainable and will continue to fall

  3. Weak customer net adds / stalling operating margin and dollars

  1. Product “enhancements”
    SNOW rolled out its warehouse scheduling service and the AWS Graviton2 chip, and quantified a 5% adverse net impact (7+% gross) to its FY23 revenue. Importantly, at the March 2022 Morgan Stanley conference, the CFO stated that such product enhancements are not a one-off: they will be repeated every year. Embedded in its business model is a 5-7% headwind to revenue every year, and that needs to be accounted for.

Now, why do they need to do that? Their spin is that this will encourage increased migration. The less charitable explanation is that they are forced by their customers to do it.

Generally, I always take management’s explanation with a pinch of salt: they are paid cheerleaders and they are always incented to put a positive spin on things.

Back to the topic: data volume is growing exponentially, in many cases faster than the users’ own revenue. If the price per unit remains the same, that is not sustainable: SNOW will gobble up more and more of its customers’ profits. That’s the “data tax” or “data bloat” phenomenon that people are concerned about.

This is made worse because:

(a) SNOW’s services are a big-ticket item. Its ARPU is $275K, compared to Datadog ($70K) and MDB ($30K). People I know who use Snowflake have complained about sticker shock when the bill comes.

(b) SNOW charges based on the compute and storage resources a user consumes, and that doesn’t necessarily have a direct linkage with the customer’s revenue. Contrast that with MDB, for example. MDB’s pricing is based on the resources consumed by an app: simplistically, the more revenue an app generates, the more resources it consumes and the more the customer pays MDB. That is a more sustainable pricing model. Read what MDB’s CEO said when asked how their pricing model differs from Snowflake’s:

"I think customers’ view, thinking about the business or thinking about how to use MongoDB in terms of applications versus in terms of data. So, when they look at an application and build an application, if actually that application is not consuming resources, something has fundamentally gone wrong because that’s actually a knock on the development team. So, they want to see good consumption. They want to see users using the application. They want to see data being generated. They want to see transactions being executed. So, I think that’s a very different phenomenon"

(Caveat: I’m not suggesting that MDB is a good buy now; I don’t hold MDB now as I have some other issues with it).

You can see how rapidly SNOW’s historical revenue growth decelerates:

FY 2020 >> 2021 >> 2022 >> 2023E
174% >> 124% >> 106% >> 70-75% (my estimate)

I believe it has at least 1 or 2 more years of rapid deceleration until it gets to around 50% revenue growth where the decline gets more gentle (roughly, that’s based on a sustainable 130 NRR and 20% customer growth rate).
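
To make that rough arithmetic explicit, here is a crude sketch (my own simplification: new customers don’t contribute a full year of spend, so actual growth lands somewhat below this ceiling):

```python
# Crude steady-state sketch: existing customers expand at the NRR,
# and customer count growth adds on top. New customers ramp over
# months, so real growth sits a bit below this ceiling.
nrr = 1.30              # assumed sustainable net revenue retention
customer_growth = 0.20  # assumed annual customer count growth

implied_growth = nrr * (1 + customer_growth) - 1
print(f"implied revenue growth ceiling: {implied_growth:.0%}")  # ~56%
```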

This brings me to the next point: its NRR trend.

2. The NRR of 170+

Its NRR is 174 now. Management stated in the January-quarter ER that it would fall to 150; now, in the April-quarter ER, the goalpost has shifted further down to 130.

I was expecting that. Quite simply, a 170 or even 150 NRR is not sustainable. Have you seen that in any other tech company? If something is too good to be true, it probably is.

NRR was above 170 because 2020 and 2021 saw rapid customer acquisition, and because the ramp-up takes a few months (data migration takes time, unlike buying modules at, say, CRWD, where the ramp is almost immediate), that artificially inflates the NRR.

With customer net adds falling precipitously, this will hit the NRR and revenue growth in coming quarters. Customer net adds have fallen for three consecutive quarters now, while net adds of large customers (the $1M TTM cohort) also turned negative in the latest quarter.

This is further obfuscated by the way SNOW calculates its NRR: it’s based on trailing-12-month (TTM) revenue, compared with the revenue 24 months ago, so it’s very much a lagging indicator (the standard way to calculate it is the latest quarter versus revenue four quarters ago).

If calculated in the standard way, I suspect SNOW’s NRR would already be much lower than 170 (based on how its current revenue and revenue guide is trending).
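
To illustrate the lag (made-up cohort numbers, purely for demonstration): take a cohort whose quarterly spend grew fast and then stalled, and compute NRR both ways.

```python
# Hypothetical cohort spend by quarter: rapid growth, then a stall.
cohort_spend = [100, 115, 130, 145, 160, 165, 168, 170, 171]

def nrr_standard(q):
    # latest quarter vs. the same quarter a year earlier
    return cohort_spend[q] / cohort_spend[q - 4]

def nrr_ttm(q):
    # trailing 12 months vs. the 12 months before that
    return sum(cohort_spend[q-3:q+1]) / sum(cohort_spend[q-7:q-3])

q = len(cohort_spend) - 1
print(f"standard NRR: {nrr_standard(q):.0%}")  # ~107%: slowdown visible
print(f"TTM NRR:      {nrr_ttm(q):.0%}")       # ~123%: still looks strong
```

The smoothed TTM figure keeps reporting strength well after the quarterly snapshot has rolled over.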

That’s not likely to change soon. Management has downplayed any suggestion that customer acquisition will return to its former pace, and is instead focusing on “quality” customers.

3. Where to from here?

IMHO, the latest ER is quite bad. All the key metrics are pointing in the wrong direction: decelerating revenue growth, weak revenue guide, weak customer & large customer net adds. Last but not least, the operating profit dollars and margins have stalled - and that has gone on for a few quarters already.

While it’s partly macro, some of it is due to execution issues. For example, the CFO admitted that Europe is weak because of execution issues and not just macro.

For the reasons I listed above, its current growth rate is unsustainably high and needs to decelerate for a few more quarters at least.

I am willing to buy this stock but not at $115 (the AH price). There is still a lot of lingering love for Snowflake among investors because of its long term “story” and previous high growth, and that’s reflected in its still premium valuation.

Human psychology is such that it takes time and repeated disappointment for love to turn into disdain. Based on my own projections, many SaaS companies are entering the cheap zone; but not Snowflake. So I am still waiting.

[The usual disclaimer applies: do your own work. I am not providing any investment advice. I am just writing for the fun of it.]



Hi Cats. Wanted to provide my perspective on this line,
"Now, why do they need to do that? Their spin is that this will encourage increased migration. The less charitable explanation is that they are forced by their customers to do it. "

First, I’m not clear what you mean by “that”. I think you mean why would they make product enhancements and provide them at no extra cost. If I’m misunderstanding, let me know.

To start, they MUST continue to innovate and improve the product. ALL databases do this. Take SQL Server for example. Every new release of SQL Server (2000, 2005, 2008, 2008 R2, 2012, 2014, 2016, 2017, 2019…), EVERY ONE has performance improvements, and MANY of them. Many had SIGNIFICANT improvements (2005, 2012, 2016 especially). Microsoft did NOT up the price of SQL Server licenses due to the improvements. In addition, most major enterprise customers are on a SaaS-like annual payment plan called Software Assurance. This allows them to upgrade to the newest versions without paying for the new license all over again.

WAY back in 2000, Oracle was the one that could handle big workloads. However, SQL Server and then others, including MySQL, eventually caught up. Remember: technology moves quickly; adoption does not. INNOVATION to improve the product MUST happen.

Now, the question as to why not charge extra for it? In the database world, especially the SaaS database world, NOBODY CHARGES HIGHER FEES FOR IMPROVEMENTS. This isn’t just a Snowflake thing. Again, looking at Microsoft, their Azure SQL Database is essentially SQL Server. They FIRST publish improvements there, which then get packaged into the latest version or cumulative update that on-prem customers can install in their own environments. In Azure SQL Database, THEY DO NOT CHARGE MORE FOR IMPROVEMENTS.

I think the notion that perhaps they “should” came from a non-techie financial analyst on the previous call. Likely someone who does not understand how databases and the database world work.

Having said all that, in my opinion, and as I’ve stated many many times, the usage model they have of paying per query is BRILLIANT. A query is like math: there are a whole lot of different ways it can be done to get the right answer. Some ways are much more efficient than others. It is VERY VERY easy, even for top developers, to write very bad queries. Writing efficient queries is a specialized skill. To do it correctly, the developer must take into account where the indexes are, data sizes and other things. One example: say we want to pull ALL the data from the past 7 days from a table, maybe a purchasing table. Assume there is a date column with a proper index on it to query. We need to compare the date column with the current date. In the first example below, we add 7 days to the column date and return the record if the result is greater than the current date. The second will return the same data set but instead subtracts 7 days from the current date.

dateadd(dd, 7, DateCol) > getdate()

DateCol > dateadd(dd, -7, getdate())

There is a HUUUUUUUGE difference in how query engines will process these two. The first one forces the query engine to SCAN all the records in the table to find the results. The second one will use the index and only look at the data in the table it needs. The second one is MUCH more efficient.
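
You can see this for yourself without a Snowflake account; any SQL engine behaves the same way. A quick sketch in Python with SQLite (table and index names are made up for the demo): wrapping the indexed column in a function forces a scan, while leaving the column bare lets the engine seek through the index.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE purchases (id INTEGER PRIMARY KEY, purchase_date TEXT)")
con.execute("CREATE INDEX idx_purchase_date ON purchases (purchase_date)")

def plan(where_clause):
    # EXPLAIN QUERY PLAN reports whether the engine scans or seeks
    rows = con.execute(
        "EXPLAIN QUERY PLAN SELECT * FROM purchases WHERE " + where_clause
    ).fetchall()
    return rows[0][3]  # the human-readable plan detail

# Function applied to the column: the index cannot be used -> SCAN
print(plan("date(purchase_date, '+7 days') > date('now')"))

# Bare column compared against a computed constant -> SEARCH via index
print(plan("purchase_date > date('now', '-7 days')"))
```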

Now, what if this query serves a dashboard and the application runs it 4 times a minute? And what if there are 10K people looking at this dashboard across the world?

Now, what if we get 10 million new rows per day in this table? That first, poorly written query will SCAN 10 million MORE NEW RECORDS EACH DAY! In a month, it’ll be scanning 300 MILLION more records on EACH EXECUTION of the query than it did the month before. And this will CONTINUE TO BUILD. Users and management may say that the dashboard, which was running well on the smaller set of data, is now lagging. The fast and easy way to improve this in Snowflake is to throw more resources at it, meaning using more Snowflake. Developers are busy, and management teams and developers don’t like to go back and re-do stuff that is already working.
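
Spelling out that arithmetic (illustrative numbers from the example above):

```python
rows_per_day = 10_000_000  # new rows landing in the table daily
days = 30

# The bad query scans the whole table, so after a month every single
# execution reads this many additional rows versus a month ago:
extra_rows_per_execution = rows_per_day * days
print(extra_rows_per_execution)  # 300000000

# And the dashboard never stops asking: 4x a minute, around the clock
executions_per_viewer_per_day = 4 * 60 * 24
print(executions_per_viewer_per_day)  # 5760 runs per viewer per day
```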

There are MANY MANY MANY, almost infinite, ways to write a query that will not perform well. The brilliance of the SNOWFLAKE pricing model is that they are monetizing the poorly written queries. I mean, they all do, but SNOW is making it much easier to use more hardware and, by association, pay more and more over time.

Anyway, I rambled a bit too long here. Hopefully this helps you understand the database world better and provides a better understanding of why Snowflake doesn’t charge more when they provide improvements.


Hi FinallyFooling,

Do those examples of enhancements and new releases from Microsoft / Oracle etc. result in a hit to revenue of 5-7% every year?



Yes, I understand the central part to your question is about the business model.

It’s fair to prefer a business model that isn’t based on usage. It’s fair to prefer more predictable short-term quarters. I don’t think it is fair to say Snowflake giving improvements away for free is anything outside of how the database world works. Being the ONLY vendor to charge for performance improvements would be a good way to hurt long-term growth, in my opinion. On the call they mentioned MANY times, in several different ways, that they are focused on the long term, not the short term.

No, Microsoft/Oracle etc, do not take that 5-7% hit every year.

However, they also don’t get paid more when a query needs to use more resources, like Snowflake does. That’s why I laid out the second part of my rambling above. Sure, they’re gonna take a 5-7% hit this time (they stated the current one is the biggest enhancement they’ve ever had, so this one is not typical). In my opinion, Snowflake is monetizing their platform in a more shareholder-friendly manner than Microsoft or Oracle.


Well said. An NRR of 170+ is not sustainable for any SaaS company; it’s almost too good to be true. One positive takeaway from the call is that management reiterated its $10B revenue guidance for 2029. In fact, they have increased long-term free cash flow margin guidance to 25%. Of course, we have to take all this long-term guidance with a pinch of salt; anything could happen in between.

Having said that, I see a moat around Snowflake’s product. In fact, I believe their product should ideally be used even more by clients in bad times, for optimizing various investment decisions based on data. The assumption is that Snowflake’s product is the best of the lot. Hopefully an inflationary environment will prove how essential these products are to Snowflake’s clients’ success, and hence the stickiness/loyalty around these solutions.

Now the market seems to be punishing new-age cloud software companies irrespective of ER quality or long-term investment thesis. All such companies are being painted with the same brush, in a mode of “sell first and ask questions later”. If I am not wrong, the 2000 dotcom-crash playbook is now being played against these companies. We got Amazon/Google/Netflix-type companies after the dust settled then. Cloud SaaS companies like DDOG/CRWD/SNOW/MDB/ZS/NET are innovating tirelessly for their customers’ success, are leaders in their respective segments, have very high gross margins and, most importantly, have earnings visibility for the next several quarters. So one could rightfully expect long-term winners to emerge from these segments when the dust settles this time around. I may be wrong; just hoping for the best in this tough time :)




I agree with your post that SNOW’s earnings report was “quite bad” versus what I expected months ago. What I mean is: if SNOW traded at $300 yesterday, I would sell out immediately.

Where I disagree with you is this:

I am willing to buy this stock but not at $115 (the AH price). There is still a lot of lingering love for Snowflake among investors because of its long term “story” and previous high growth, and that’s reflected in its still premium valuation.

I’m curious, what price would actually trigger you to buy? What are you waiting for? A ridiculous forward EV/S of 10 or something?

At $115 premarket, or even $132 at close yesterday, I don’t see a reason to sell. Valuation cuts both ways. We should overscrutinize and nitpick as much as possible when a stock is overly expensive. However, when a stock is already cheap (in my opinion), I think it’s fair to give a lot more slack as long as a durable growth pathway remains intact.

At an enterprise value of $36B (premarket), and projected to grow (conservatively) to $10B in revenue with 25% FCF margin while still growing 30% YoY in calendar year 2028? Looks like a great opportunity to buy. Of course, I am quite underwater on my initial purchases of SNOW, but I don’t see other companies that can be a better investment (and I can only allocate so much to DDOG, CRWD, etc.).

Slide 21:…


I’m not sure the assertion that Snowflake are capitalising on developers writing bad queries is reasonable.

  1. A big part of SNOW’s data warehouse value is “Performance at scale with no tuning”.
  2. SNOW repeatedly state “lower cost” to traditional data warehouses.
  3. Developers are unlikely to write data warehouse queries. I can’t imagine a scenario where a Database Administrator wouldn’t manage Snowflake, and tuning queries is exactly what those people specialize in. It’s not that difficult to do, given that databases (including Snowflake) will tell you where the time/compute is being spent.

I think Snowflake don’t charge more because they are competing against AWS, Teradata, BigQuery, Databricks, Oracle et al., and moving workloads to new data warehouses is a significant and expensive operational effort, given you have to run your existing warehouse as well as Snowflake, and manage data migration between the two. Essentially, Snowflake has to be “better performing” and “lower cost”; otherwise, why bother?




You may think that 10x EV/S is ridiculous; but hear it from Bill Gurley, a highly respected tech investor whom Jeff Bezos admires (and retweeted recently):

See thread #3 of Gurley’s tweet: “valuation multiples are always a hack proxy. Dangerous to use. If you insist, 10X should be considered AMAZING and an upper limit. Over that silly.”

Interestingly, he thinks 10x is amazing and an upper limit, while you think it’s ridiculously low. I didn’t post Gurley’s tweet on this board because I think his view of a 10x upper limit is way too rigid and will cause panic.

I suggest that people do a very simple 5-year model of the companies they invest in. Assume a conservative/reasonable rate of growth, operating margin and share dilution, and apply a range of terminal multiples. It will be rough and inaccurate, and the range of outcomes will be wide; but it will help anchor your decision on whether the current price is within the realm of reasonableness. That exercise helped me avoid buying NET, SNOW etc. at their peaks last year.
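
To show what I mean, here is the skeleton of such a model in Python. Every input below is a placeholder assumption for illustration, not my actual SNOW forecast:

```python
# Toy 5-year model: compound revenue along an assumed growth path,
# apply a terminal FCF margin and multiple, haircut for dilution.
revenue = 2.0                                  # $B, starting revenue
growth_path = [0.60, 0.45, 0.38, 0.33, 0.30]   # assumed deceleration
terminal_fcf_margin = 0.25                     # assumed year-5 margin
terminal_multiple = 25                         # assumed multiple on FCF
annual_dilution = 0.03                         # assumed share-count growth

for g in growth_path:
    revenue *= 1 + g

future_ev = revenue * terminal_fcf_margin * terminal_multiple
# value attributable to today's shares after 5 years of dilution
ev_today_share_base = future_ev / (1 + annual_dilution) ** 5
print(f"year-5 revenue: ${revenue:.1f}B")
print(f"implied year-5 value on today's share base: ${ev_today_share_base:.0f}B")
```

Compare that implied future value against today’s enterprise value and ask whether the gap compensates you for the risk. Vary each input to see how wide the range of outcomes really is.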

For Snowflake, we have management’s 2029 target; but we will not know what the world looks like in 2029… and we certainly won’t know whether it will be a ServiceNow or a Teradata/Oracle by that time (which obviously determines the right terminal multiple). Given the pace of disruption in this space, one shouldn’t take for granted that it will be a ServiceNow in 2029.

What I do know is that its revenue growth has been, and is, decelerating rapidly, at 20 points a year; and from management’s comments on its NRR trend (from 170 to 150 and then to 130), that pace of deceleration will likely continue for another year or two. So my entry price is very conservative until either the macro environment changes or SNOW arrests that slide.

“Of course I am quite underwater on my initial purchases of SNOW but I don’t see other companies that can be a better investment (and I can only allocate so much to DDOG CRWD etc)”

I suggest the reason why you (and many others on this board) feel that way is because you only do hypergrowth. So people end up with a situation where they are not really comfortable with the hypergrowth space this year, but are forced to either hold cash or keep buying hypergrowth because there’s nothing else in their investment universe.


Catsunited, so then, is it actually forward 10x EV/S that you’re looking for in SNOW (which would be maybe $50-60 a share right now) or not?

A quick look at your previous posts indicates you did own SNOW before at “ridiculously high valuations”, and you did own NET at $100 a share at an even more crazy-ridiculous valuation.
Scrolling a bit further, I see you even owned AMPL, CRWD, MNDY etc., and that was when they were trading at probably much greater than 20x forward EV/S ratios.

Your actions simply don’t seem consistent with what you are trumpeting here. If you were just buying them as some kind of trade, I guess sure, but why even bother looking at these stocks in the first place that are way too lofty in price for your standards?

Not being antagonistic in any way, but it just doesn’t make sense to me what you’re trying to point out with Bill Gurley’s quote, unless you actually disagree with him that forward 10x EV/S should be an upper limit.


Hi jonwayne, your post did strike an antagonistic tone with me. I’m not sure how what catsunited did with SNOW, AMPL etc. in the past is relevant to the investing thesis for SNOW now.

But be that as it may, what is your thesis for SNOW? Where does it differ from catsunited’s? How could your thesis be wrong? What is the evidence that it’s right? Is it possible that the massive EV/S ratios of the past are a thing of the past, or will they bounce back if inflation comes down? And… will SNOW be a beneficiary?

SNOW does seem to have rapidly decelerating growth, and not a lot of evidence of operating leverage.

Definitely food for thought.



I suggest the reason why you (and many others on this board) feel that way is because you only do hypergrowth. So people end up with a situation where they are not really comfortable with the hypergrowth space this year, but are forced to either hold cash or keep buying hypergrowth because there’s nothing else in their investment universe.

I sense bitterness and resentment behind those typed words. I thought the core thesis of the board was to discuss hypergrowth stocks, so for me it is all I do and it has been my investment universe. I’ve been investing only in hypergrowth since 2014, have done very well, and will not abandon the strategy because of a six-month, one-year, two-year or three-year downturn.

Of course I am quite underwater on my initial purchases of SNOW but I don’t see other companies that can be a better investment (and I can only allocate so much to DDOG CRWD etc)

Jon, I thought you had something like 50% of your entire portfolio in UPST in summer/fall 2021 (and wisely reduced your position greatly after the earnings report in November, but we both made money buying in way before the run-up), so add more to DDOG, CRWD, your first-tier picks. No way any of those picks are approaching 30, 40, 50% of your portfolio.



Who says you have to stick to a stock even when facts change or the macro environment changes?

I did post that I owned NET at $100 early this year, but I traded in and out of it quickly when (1) it became clear that the investing environment had changed for the worse; and (2) I started to pay more attention to its zero-profit strategy (which the CEO reiterated again on its latest ER).

$100 for NET may work under some reasonable assumptions according to my model; that’s why I bought it at that price early this year. But it certainly will not work under the stagflationary/recessionary environment that the market started taking seriously as the year progressed. That’s why it’s at $50+ now.

As for the other stocks like MNDY and AMPL, it’s the same. I bought those earlier, but that was in a different environment. I am long out of them because company-specific facts and the investing/rates environment have changed; and getting out quickly allowed me to avoid all these painful losses. There’s no rule that I must stick with them through hell and high water, right?

By the way, YOU bought NET in the $140+ range according to your previous post. Are you saying that, given a chance to dial back the clock, you still won’t change your mind and would still buy the stock at $140+ even though it’s at $50+ now? And you would gladly incur this 65% loss in the name of “consistency”?

That just sounds mind boggling to me.

And by the way, if I think the facts warrant it again, I will be back in NET at $50, $60 or even $100. Why is that a problem?

Even Saul changes his mind on a dime and he has also said he’s not very consistent. Do you have a problem with that?



Sorry if it reads that way, but in no way am I saying the only or even the biggest reason to invest in $SNOW is poorly performing queries. There are plenty of reasons, and a bigger one is the EXPLOSION of data that will keep going. This data EXPLOSION will continue and will cause queries to need more and more Snowflake.

On your item number 3: I don’t really want to get too far into the details, so I’m trying to be as general as possible so that non-techies can understand as well. I’ve got a lot of stories about this.

While I believe all or at least most queries should be written or approved by a DBA, that is not how it works at most places. I am continually SHOCKED at the stuff going on under the covers in software. You’d think software sold by big mega-companies like Infor or Epic would be finely tuned machines. Unfortunately, the reality is that very few applications are written in an efficient manner. In fact, one of the Infor products that one of my larger clients uses is the worst-designed application I’ve ever seen. It’s so bad that there is a sub-industry of companies that specialize in improving the performance of this specific application. I used to present at tech events, and my favorite talk was centered around how poorly designed and written many vendor applications really are. It’s gotten to the point where I’m SHOCKED when there is a good one.

One reason I love the SaaS model is it forces these companies to deal with their own coding issues if they want to improve their internal performance.

Second, DBA is a pretty general term nowadays (not quite as general as RN or “IT professional”, but close). Very few DBAs are experts in tuning. This is one of my best skills; I interview and assess the skills of others. You’d be SHOCKED that there are even senior-level DBAs who do not know how to write a query or even do a single join. Anecdotal, I know, but one guy I was interviewing opened up a GUI QUERY BUILDER when I asked him to write a simple query. For a SENIOR-level position.

Believe me… many data scientists can barely spell SQL. They’re great at Python and the data-scientist-centric tools. SQL… leaves a lot to be desired.

Getting into the weeds here a bit. My overall point with the poorly written queries is not to overlook them: they are a non-trivial piece of the SNOW model’s advantages.

Sure, during migration, it’s likely a parallel run before the confidence is there to turn off the old system. Snowflake needs to be faster and cheaper to spur more movement; yup, that’s what they say on their calls: these improvements spur more workloads to be moved. However, charging extra for performance improvements is not the industry standard. My Azure SQL Database charge does not go up when they push new features or performance improvements. Heck, I usually don’t even know they did it. There was a case in the old model, but ONLY IF companies were not paying for the support features (so if they were running a 2005 version, they’d have to pay for the 2012 version at full price to get those features). However, most companies paid for the support features, and that includes upgrades to the most recent version(s). Note, though, that even in this scenario, the price of the 2012 version was the same as the price of the 2005 version.


I’m not sure what Gurley’s investing style is, or Bezos’s. We do know Buffett invested in SNOW at an NTM P/S multiple of around 70.

Have you read the KB? I know it’s OT, but I’d love for you to explain how a company like SNOW should be valued and how it would fit into the 10x multiple it seems you’re claiming.

Growing > 80% YoY
FCF 16% this year, just upped guidance for this
DBNRR > 170, likely to drop but still > 150…
Gross Margin 75%

How do you handle this in your valuation calculation and how does it then come to be under 10?


The brilliance to the SNOWFLAKE pricing model is they are monetizing the poorly written queries

I use Snowflake. None of this is accurate.
Be careful what you read on the internet.
Moreover, screwing your customer is never a good long term strategy.


I know what I’ve said is not 100% accurate all the time from a tech standpoint. I’m trying to summarize it without getting too far into the weeds because of the audience. In reality, it is very complicated. Exciting for a data geek like me but for most here, not very useful.

Snowflake charges based on CPU consumed. If a query is written poorly, it takes more CPU to run the query. The poorly written query costs more in CPU, credits and therefore dollars.

Sure, if the warehouse is already up and running, and is going to be for a while, the impact of a poorly written query on dollars isn’t as big. If the warehouse is running just for my set of queries, or my poorly written queries are extending the uptime of the warehouse, then it is directly impacting dollars.

Two more scenarios where a poorly written query directly impacts dollars:
If I know I’m about to run a heavy workload, I can up the resources, then lower the resources when it’s done.

I could have a separate warehouse over my database, used just for the heavier queries.
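
To put rough numbers on those scenarios: Snowflake bills warehouse uptime per second (with a 60-second minimum) in credits that scale with warehouse size. The credit tiers below are Snowflake’s standard sizes, but the dollars-per-credit figure varies by edition and region, so treat the totals as illustrative only.

```python
# Illustrative consumption-billing sketch. Credit rates per hour are
# Snowflake's standard warehouse sizes; $/credit is an assumption.
CREDITS_PER_HOUR = {"XS": 1, "S": 2, "M": 4, "L": 8, "XL": 16}
PRICE_PER_CREDIT = 3.00  # dollars; varies by edition and region

def cost(size, runtime_seconds):
    billed_seconds = max(runtime_seconds, 60)  # 60-second minimum
    return CREDITS_PER_HOUR[size] * (billed_seconds / 3600) * PRICE_PER_CREDIT

# A tuned query on a Medium warehouse vs. a bad one that makes you
# spin up an XL warehouse and grind for half an hour:
print(f"tuned, 10s on M:      ${cost('M', 10):.2f}")
print(f"untuned, 30min on XL: ${cost('XL', 1800):.2f}")
```

Multiply that gap across a dashboard refreshed all day by thousands of users and you can see how query quality flows straight into the bill.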

I don’t consider this screwing over customers. In fact, it’s the opposite. In the ‘old world’, you would need the capacity to handle the heaviest workloads efficiently and pay for that even for the lighter workloads.


This will be my last post on these technical details on this thread as we are WAY off-topic. Please feel free to email me (select the email on the reply and not to post to the board) if you’d like to discuss the technical details more. I’d love to talk about that too.

Whether Snowflake is TRYING to make money off bad queries or not doesn’t matter. I don’t know their intent and am not trying to say that is their intent. However, the fact is they DO. If someone writes bad code, it will cost more $$. That’s why query tuning is a thing to help customers save money in Snowflake. But don’t take my word for it.

There are a plethora of places to find more information about this. A Google search turned up a whole lot more than I can include here, but below are some examples, including one from Snowflake’s own community site. The SNOWFLAKE COMMUNITY SITE says:

The number one issue driving costs in a Snowflake deployment is poorly written code! Resist the tendency to just increase the power (and therefore the cost) and focus some time on improving your SQL scripts.…
With compute capacity paid on a consumption basis and most legacy performance tuning now handled by the service, increasing performance is often as simple as spending a bit more money. But as easy as Snowflake makes it to boost performance, there are some tweaks you can make to ensure you are getting the best bang for your Snowflake buck.

FF here: if you read this article further, it gets into the query-tuning details to SAVE MONEY. Also note, numbers 2 through 7 are SQL-specific, not just Snowflake query-tuning items. These tips work on most SQL databases, columnar or tabular. This is the same theme through most of these.

Also, the bolded part above is what I started off saying: this pricing model is BRILLIANT!! It makes this ‘perf increase’ easy, by spending a little more instead of undertaking the longer task of fully tuning the query.…

Efficient script writing lowers Snowflake compute costs

Poorly written code can cause severe bottlenecks in any database or warehouse and drive up costs. It’s worth your effort to write efficient SQL code rather than adding more and more compute.…
SQL coding

The number one issue driving costs in a Snowflake deployment is poorly written code! Resist the tendency to just increase the power (and therefore the cost) and focus some time on improving your SQL scripts.

  1. Use ANSI Joins because they are better for the optimizer
    again, a SQL related tuning, not SNOWFLAKE only…
Factors that Impact Snowflake Costs
Before we see how you can achieve Snowflake cost optimization, let us first take a look at some of the key factors that add to costs.

Resource-Intensive Queries

When users fire complex queries, costs are bound to escalate. For instance, if you make your data available to 1000 users and each of these users submit 10 ad hoc queries a day to the data warehouse, and the query they fire has multiple Joins or Group Bys and scans through billions or even trillions of rows at query time. In this case, every query is resource-intensive, and your register keeps ticking with each query fired to the data warehouse.

There are many many more…


In the ‘old world’, you would need the capacity to handle the heaviest workloads efficiently and pay for that even for the lighter workloads.

This part is still the big true thing.

And it’s a really big deal. Sizing a server for an onsite Data Warehouse is a budget nightmare.


The CFO could have been describing the 5-7% headwind each year against an NRR of 170. So yes, you won’t get the full benefit of the stated NRR, due to the optimization headwinds. But Snowflake’s NRR has also been much higher than peers’ and seems like it will settle at a ratio higher than peers’ (well above 130 for many years, according to the CFO).

Hard to make any conclusions without more context.

For me, they are delivering on everything they’ve stated. Never cut guidance, always beaten estimates, and FCF margin rapidly improving. Why would I sell just because the stock is down?