Snowflake Q2 Earnings call

Just posted this on Twitter, but I'm putting it here for everyone's convenience.

Just finished listening to the $SNOW call while reading the transcript.

Outside of the raw numbers, this is a VERY good report.

What I LOVED!!
Consumption: Optimizations, while always ongoing, have essentially bottomed.
*Snowpark +70% QoQ

Sales channels: customers re-engaged
*early renewals
*discussions with customers now center on new projects, migrations, use cases, and LLMs instead of optimizations

When will AI spend more clearly hit the software layer?

I think it’s going to be next year. As I said, it’s going to take some time for AI. And people are still struggling to get GPUs. And there is a time lag between when a chip manufacturer sells their chips and when those chips get built into the hardware that actually gets deployed in a rack in a data center and delivered to customers.
I think you will see the leading edge actually happening in the months to come. But the material impact, I think, most analysts out there are seeing in 2024. And we tend to agree with that.
And I would say in my prior life, when we were buying racks of servers, there was a six-month delay between when we bought them and when they actually went into production. And I don’t see that being any different with GPUs.

LOTS of big things coming at the end of this year. Remember when I wrote about the potential of CONTAINER SERVICES? It makes Snowflake potentially a HYPERSCALER built on HYPERSCALERS!! Discussion around this:

Once Container Services becomes prime time (and that is part of Snowpark), obviously any workload becomes fair game to be deployed on Snowflake. This is obviously running close to the data inside our governance perimeter. You know, it’s essentially virtualization, you know, for the cloud.

So, we think there is enormous upside, you know, for us, you know, once those services become generally available across all our cloud platforms.

Container Services was the absolute hit, the star performer at our Snowflake Summit conference. I mean, customers were just mesmerized by the possibilities that this platform capability has, because we essentially eliminated any limitation on deployment on Snowflake. And why do you care? I mean, the thing is, first of all, you wanted to deploy close to the data for all the reasons that we talk about. This enables that.

It's a fully trusted, sanctioned platform where you can deploy applications without any further questions. And one of the challenges that you have in cloud computing is, who’s managing this, right? I mean, what is the safe space to deploy into, and who is really guaranteeing the high-trust, enterprise-grade capabilities of the platform? So, we are bringing that. So, we’re going to see a lot of services. A lot of them could be on-premise legacy engines that are going to be containerized and reserviced as a cloud service, right? So, a lot of things that were old will be new again.

It’s virtualization for the cloud, and having secure, safe, high-performance, very, very efficient spaces to run services and applications. And so, the sky is the limit on this capability. And, you know, we and our customers and our partners could not be, you know, more excited about the potentials and the possibilities here. But specific to AI, this matters a whole lot because the containers are our vehicle, our vessel, if you will, to deploy large language models. There are no limits on which models, and how many models, and for what segments of the business, you know, we can deploy.

We can shift gears very, very quickly. And we have incredible flexibility in terms of deploying these capabilities, because you’re going to see a lot of change and a lot of movement. We’ve already seen an enormous amount. That’s going to continue.


I think everybody is likely aware of the year NVDA is having. It feels like it is setting the world on fire. I thought it was interesting that, during the NVDA post-earnings conference call on Wednesday, NVDA mentioned SNOW a few times:

NVDA CFO Collette Kress: "Enterprises are also racing to deploy generative AI, driving strong consumption of NVIDIA powered instances in the cloud as well as demand for on-premise infrastructure. Whether we serve customers in the cloud or on-prem through partners or direct, their applications can run seamlessly on NVIDIA AI enterprise software with access to our acceleration libraries, pre-trained models and APIs.

We announced a partnership with Snowflake to provide enterprises with accelerated path to create customized generative AI applications using their own proprietary data, all securely within the Snowflake Data Cloud. With the NVIDIA NeMo platform for developing large language models, enterprises will be able to make custom LLMs for advanced AI services, including chatbot, search and summarization, right from the Snowflake Data Cloud."

NVDA CEO Huang: "L40S’ focus is to be able to fine-tune models, fine-tune pretrained models, and it’ll do that incredibly well. It has a Transformer Engine. It’s got a lot of performance. You can get multiple GPUs in a server. It’s designed for hyperscale scale-out, meaning it’s easy to install L40S servers into the world’s hyperscale data centers. It comes in a standard rack, standard server, and everything about it is standard and so it’s easy to install.

L40S also comes with the software stack around it, along with BlueField-3 and all the work that we did with VMware and the work that we did with Snowflake and ServiceNow and so many other enterprise partners. L40S is designed for the world’s enterprise IT systems. And that’s the reason why HPE, Dell, and Lenovo and some 20 other system makers, building about 100 different configurations of enterprise servers, are going to work with us to take generative AI to the world’s enterprise."

NVDA CEO Huang: "We’re extending NVIDIA AI to the world’s enterprises that demand generative AI, but with model privacy, security and sovereignty. Together with the world’s leading enterprise IT companies, Accenture, Adobe, Getty, Hugging Face, Snowflake, ServiceNow, VMware and WPP, and our enterprise system partners, Dell, HPE, and Lenovo, we are bringing generative AI to the world’s enterprise."

Sounds like a ringing endorsement of SNOW by one of the top companies in the world right now.