SNOW: March 7 JMP conference

Things that stood out to me on today’s SNOW conference with JMP.

  1. Margin expansion can continue if scale increases or if costs are reduced with cloud vendors. They want to keep renegotiating with AWS etc. (which they have been doing, successfully so far). They also continue to tighten discounting discipline for new customers.

“…the only way we’re going to continue to get margin expansion is either getting at scale and a lot of the deployments around the world where we’re not there yet. It’s going to be taking costs out or renegotiating with our cloud vendors, and then continuing to have better discipline on our discounting. And that’s something I’m super focused on because we talked about last week, we have new – we’re constantly enhancing our software. We don’t increase our price for existing customers. But as we land new customers, they shouldn’t be paying what those old customers got because they’re getting more value for what they’re doing.”

  2. Business is very strong. They booked $300 million more in contract value in Q4 alone than they had initially planned.

CFO: “We just had a record Q4. We booked over $1.2 billion in contract value in Q4 alone, $300 million more than what we were planning on doing.”

  3. An interesting response on why the CFO thinks their stock dropped. The CFO believes the market ‘forgot’ about a predicted headwind for FY2023 that he had stated on a May 2021 conference call (unless I’m misunderstanding him). (I personally think, however, the after-hours fall was a knee-jerk reaction to a big Q4 headline revenue miss on the ‘whisper number’, plus the lighter-than-expected guidance, which is of course attributed to the unexpected increase in product performance.)

Analyst: “…yet the stock was down…in the aftermarket, it’s down like 22% or something like that. So tell – what was going on there? What were people so bent out of shape about? And what’s the right answer?”

CFO: "I reminded them that there’s going to be about $100 million headwind, which back in May of 2021 on our conference call, if you listen to, I actually told people, there was going to be this headwind in revenue in 2023 with the new AWS chip platform that they rolled out. Graviton2, they call it. So I think that’s what they didn’t like. And I don’t think people focused on that."

  4. More granular detail on how performance improvements lead to increased consumption in the long run.

CFO: "And as we become cheaper, people put more workloads into us. And as we become faster, there’s more workloads because of latency before that they wouldn’t run on Snowflake. And we have many customers that have told us, “If you can get your performance to this, we’ll move more of these workloads.” And so we know because we’ve been doing this for quite some time, there will be more workloads by customers moving into us. And as an example, many times when we do a big on-prem migration, you’re not shutting down the legacy system 100%. I think we have hundreds and hundreds of on-prem migrations that were in various stages. We’ve only – I think there’s only about 50 customers that had completely shut down their legacy system, whether that’s Teradata, Netezza or others.
Why? Because many customers just moved their most key workloads into Snowflake. And as we become cheaper, they’re willing to move other stuff to Snowflake. And it is a multiyear journey for most companies to do an on-prem migration. Why? Because they’re hard."

  5. Apparently, other hyperscalers have failed at migrations out of legacy systems too. It is very interesting to see that Snowflake has solved technical problems that GCP or MSFT couldn’t.

CFO: "I know GCP has struggled with a very large retailer for 2 years to do a Teradata migration and they’re failing miserably. And why I know this, too, many times, we only get involved after one of the hyperscalers failed. We have a big one that – Microsoft failed. They tried for a year and couldn’t do it, and we were successful…So we, with a partner, have written some pretty good tooling to be able to translate the proprietary language that the code is stored in…"

  6. AWS and Azure partnerships are strong; GCP continues to be protective and tough to work with. This was stated on prior calls, but it still stands out.

CFO: "83% or 82% of our revenue is associated with AWS… We actually have deals where AWS will throw dollars to us to help get the deal so that it doesn’t go to GCP. GCP is the one we co-sold 0, and they’re the most competitive with us…It’s funny, I had a call 3 weeks ago with the people at Google, and I pointed out to them there were 300 instances where you guys compete it to the very end with BigQuery and we won. And all of those customers ended up in AWS or Azure when they all could have been in Google if you would had just partnered with us and you would have been able to sell some of your AI or ML technologies around it.”


Jon Wayne, thanks so much for posting the Snowflake and Zscaler conference highlights. Your summaries were great and very helpful, and they touched on things the managements hadn’t said anywhere before, at least as far as I remember.
Thanks again,


First, I want to thank Jon for bringing this to the board. This is immensely valuable information, and you do an excellent job finding and sharing it.

Second, I wanted to attempt to explain a few things that are more technical in nature. One is how I look at performance improvements and how that relates to other, more traditional systems. The other is an explanation of what Teradata is and why it is brought up so much.

Databases, like any software, have updates over time. With a traditional on-prem database system (think SQL Server, Oracle, Postgres, MySQL), for a customer to get these updates, they must upgrade to the new version. This is a time-consuming process that requires testing, downtime, risk & in some cases, additional cost (for example, the vendor of the software running on the database platform might require upgrading to a newer version of that software, which is a risk in and of itself). It is very common for a customer to run an old, even unsupported version (think security risk) due to this needed time & coordination. I still see new clients running systems that are 10-15+ years old!! With the pace of innovation these days… well, I’m getting off-topic. FOCUS FF, FOCUS.
With the advent of platform as a service (database as a service), innovation and performance improvements continue. I mean… it would kill the company if they did not, because every competitor is improving. The hyperscalers all have database-as-a-service offerings: AWS has Aurora, built on the open source databases MySQL & Postgres; Microsoft has managed services for those open source databases too, plus Azure SQL Database built on SQL Server, since they own that code base. Oracle… well, nobody likes Oracle. Haha.

Anyway, Aurora and Azure SQL Database provide performance enhancements and customers rarely even notice; no real outage, no having to test, schedule, migrate, etc… It happens automagically. Snowflake is doing the same thing. The biggest difference, though, is how Snowflake charges for their platform, which makes the upgrades MUCH MORE NOTICEABLE. Obviously, as we are learning, investors notice when it’s a big improvement, but think of the customers… They see that the workload cost WENT DOWN! This is likely a BIG reason why they get a 100% satisfaction rating. With a traditional database, nobody will notice performance improvements; I mean, we’re talking milliseconds. Not noticeable to the human eye (just like I never noticed my 14-year-old going from a little guy to what is sitting in front of me right now). However, you can bet that whoever is paying the bills is noticing the lower cost, and it’s not staying a secret.
This will drive more STICKINESS. If you know they will continue to improve and lower the costs, why would you take the risk and cost of switching platforms for a cost savings that might not be there in 6 or 12 months?
Also, as stated on the call, this drives more customer adoption. It motivates customers to put more workloads and data on the system. Over time, these additional workloads will more than make up for the $$ lost to the short-term improvements. It is also important to know that the more data a query has to run against, the more CPU it takes to run quickly. With data growth being an exponential thing… this is a very minor cost for MEGA growth in the future!!
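The consumption-billing arithmetic above can be sketched with made-up numbers. To be clear, the per-credit price, credit counts, and growth figures below are purely illustrative assumptions, not Snowflake’s actual pricing; the point is just that on consumption billing, an efficiency gain shows up directly as a lower bill, and added workloads can more than offset the dip:

```python
# Illustrative sketch with hypothetical numbers (NOT Snowflake's real pricing):
# on consumption billing, a performance improvement shows up as a lower bill
# for the same workload, and added workloads can overtake the revenue dip.

def monthly_bill(workloads: int, credits_per_workload: float, price_per_credit: float) -> float:
    """Consumption-style bill: pay only for the compute credits actually burned."""
    return workloads * credits_per_workload * price_per_credit

PRICE = 3.00  # hypothetical dollars per credit

# Before a platform speed-up: 100 workloads at 10 credits each.
before = monthly_bill(100, 10.0, PRICE)          # 3000.0

# A 20% efficiency gain: the SAME workloads now burn 8 credits each,
# so the customer's bill visibly drops -- this is the "noticeable" upgrade.
after_same_load = monthly_bill(100, 8.0, PRICE)  # 2400.0

# The cheaper per-workload cost attracts more workloads over time.
after_growth = monthly_bill(140, 8.0, PRICE)     # 3360.0

assert after_same_load < before < after_growth
```

The customer sees the middle number (the drop), while the vendor is betting on the last one (workload growth recovering the revenue), which is essentially the dynamic the CFO describes.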

The second thing I want to provide some information on is Teradata. We’ve seen & heard talk about Teradata, but for the non-techies, what does this mean? Teradata is a MEGA data warehouse system that has been around awhile. It is proprietary and very expensive. As a result, it is mostly used in very large companies and, of course, companies that have been around awhile. The only place I’ve been that used it was Salesforce. I’m an IT consultant and have helped hundreds of companies in 15-20 different industries. I know, still anecdotal, but it’s not like I’ve only been in a handful of companies. From my VERY anecdotal observations of Teradata (just ONE company), it is cumbersome and complicated and requires very specialized staff. I think there are others here who have a better understanding of the nuts and bolts of it; perhaps Smorg or Muji? Anyway, like I said, these are generally among the largest data stores to pull data from. The fact that Snowflake has landed hundreds of customers migrating off Teradata is a very big deal. It’s highly likely that each one of those is at least $1M ARR; possibly way more.
From the transcript, page 18 from CFO:
“As a reminder, we have landed hundreds of customers to do these big on-prem teradata migrations. I think we have only completed. We’re completely shutdown a little over 30 of those. It’s maybe in the mid-30s now.”


Good information, jon and FF.

The CFO spoke about his top 2-3 priorities in the company now. First was the gross margin issue and trying to make their system more efficient by recreating a lot of the 3rd-party tools internally, since the costs of those 3rd-party tools were going up. One thing I’m not following the CFO on is the issue of “discounting”. Here is what he said:

So really the only way we’re going to continue to get margin expansion is either getting at scale in a lot of the deployments around the world where we’re not there yet. It’s going to be taking costs out renegotiating with our cloud vendors and then continuing to have better discipline on our discounting. And that’s something I’m super focused on. Because we talked about last week, we have new – we’re constantly enhancing our software.

We don’t increase our price for our existing customers, but as we land new customers, they shouldn’t be paying what those old customer got, because they’re getting more value for what they’re doing. And you’re never going to increase. A lot of people are saying, well, why are you giving this price concession to your – it’s not a price concession to them. They’re just able to do more for less with what we have and I’m not going to go back and increase.

Is he talking about the “product improvements”, and getting better at pacing them so they don’t create these revenue headwinds? Or is he talking about giving discounts to win new customers? I’m having a hard time understanding what he’s trying to say.

Thanks, Jonwayne, for the notes.

I just listened to the chat and wanted to add to what Jonwayne shared.

Here is the link, you can register and listen as well.

It’s 25 minutes long.

Notice the ease with which Mike Scarpelli is telling the story of Snowflake, what it is, what it enables for customers, and the huge amount of leadership/organizational work he has done over the last 3 years with the company.

There is a short discussion on Data Sharing and Mike is clear, deliberate, precise in telling us how the Clean Room technology enables new consumption for Snowflake and ease of use for customers.

There are two examples used:

First, he describes what Data Sharing is enabling:

Data Sharing is extremely good at security and governance.
We know exactly who is accessing what, and you can control which data gets accessed by whom. Data is never transferred; you’re just giving someone (it could be someone within your company, a different division, or a completely separate company) the ability to access that data without ever transferring it. You know exactly who is accessing the data, you can control the amount of time you want them to have access to that data, and so that person can query that data directly in their system.

Example 1:
Pat Walravens shared how his CIO (I assume of JMP Securities) contrasted the old way of doing things (the example was working with 3rd-party Merkle data, I think) with the new Snowflake way of doing things (the Clean Room approach):

It used to be, we would send them the file, they would have their own database, they would have to do a merge, they would send it back, we would have to do ETL processes, and that whole thing would take at least a day or two.

And now: Snowflake has the Merkle data in a Clean Zone where we can’t see it, but we can join the two. We avoid ETL, we avoid extraction, we avoid the merging, and it happens immediately instead of having to wait a day.

Example 2:
Business between hypothetical Sporting Goods Retailer and Hulu.

Retailers don’t like to share their customer information. it’s very proprietary to them. Suppose there is a Sporting Goods Store, they have all their loyalty programs with all of their customers and so they know everyone’s email address. That retailer wants to go to Hulu to do direct advertising through a Clean Room. That Sports retailer can put in all of their customer loyalty information. Hulu knows all of their subscribers. Through Snowflake they can quickly see all of the overlapping customers. And that Sports retailer can have a targeted campaign specifically to all of the Hulu customers without anyone knowing any of the information of one another. That’s what Clean Room technology does, and that’s all true data sharing
… and there are no cookies, you get around all the GDPR.
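The overlap matching in the Hulu example can be sketched as a toy hashed-ID intersection. To be clear, this is my own simplified illustration of the clean-room idea, not Snowflake’s actual implementation (the example emails and the SHA-256 pseudonymization step are assumptions; real clean rooms govern even the pseudonymized data rather than relying on hashing alone):

```python
# Toy sketch of the clean-room overlap idea (NOT Snowflake's implementation):
# each party contributes only pseudonymized identifiers to a neutral zone,
# the match runs there, and neither side ever sees the other's raw list.
import hashlib

def hashed(emails):
    """Pseudonymize identifiers before they leave each party's hands."""
    return {hashlib.sha256(e.lower().encode()).hexdigest() for e in emails}

# Hypothetical customer lists -- each stays private to its owner.
retailer_loyalty = {"ann@example.com", "bob@example.com", "cho@example.com"}
hulu_subscribers = {"bob@example.com", "cho@example.com", "dee@example.com"}

# The "clean room": only hashed sets meet; the intersection size is the
# targetable audience, with no raw emails ever exchanged.
overlap = hashed(retailer_loyalty) & hashed(hulu_subscribers)
assert len(overlap) == 2  # bob and cho appear on both sides
```

The retailer learns it can reach 2 overlapping customers on Hulu, but neither company ever sees the other’s underlying list, which is the essence of what the CFO describes.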

Super cool indeed


Hi Buy Lower, you are asking if he’s talking about giving discounts to win new customers here:

We don’t increase our price for our existing customers, but as we land new customers, they shouldn’t be paying what those old customer got, because they’re getting more value for what they’re doing.

I read that very clearly as saying that while they don’t raise prices for existing customers as they add functionality, they charge new customers more because so many bells and whistles have been added.




You clearly have your Snowflake CFO decoder ring. :wink: I followed up with IR and got this response:

Today, we are more disciplined in the discounting that we offer to new customers than we were when landing flagship customers as a new company.

Good to see them able to keep the growth up without having to discount as much.