Cloudflare 2Q23 Earnings

How did Cloudflare do versus my expectation? (Laid out here: Ben’s Portfolio update end of July 2023)

  • Reporting Fiscal Q2 2023 on 8/3/23.
  • Revenue expectation: $307M (6% QoQ, 31% YoY), hoping that the beat will improve from 0% last Q to 0.7% this Q.

—> They managed $308.5M (6.3% QoQ, 31.5% YoY), exceeding my expectation.

  • Q3 new revenue guide: $325M (6% QoQ, 28% YoY), which I would interpret as $328M (7% QoQ, 29% YoY), expecting 1% more QoQ growth than in the previous quarter.

—> They guided for $330.5M (7.1% QoQ, 30% YoY), exceeding my expectation.

  • I would like to get an update on three topics: a) Sales/GTM issues b) their “$5B in 5 years” goal c) their AI angle.

—> I have only listened to part of the call so far, but regarding

a) “Our focus on go-to-market improvements is already paying off. (…) our improved execution led to a record quarter in new ACV bookings. (…) As we discussed last quarter, we made significant changes in our sales team to proactively address underperformance. That went very well, both qualitatively and quantitatively. Our top performers are invigorated. We saw a marked improvement in the average account executive productivity. At the same time, we’ve implemented robust onboarding, enablement, and training programs. Combined with the record number of applicants we’re seeing for sales roles, this makes for the right formula to build a world-class sales organization. And our team is armed with great products to sell.” This sounds very good and optimistic which I think is justified by the numbers they just delivered.

c) “Our innovation engine remains in high gear, and by our estimates, Cloudflare is the most commonly used cloud provider across leading AI startups. In the second quarter alone, we shared ten major announcements and features to extend Cloudflare Workers as the preeminent development platform built for the age of AI. We believe we’re uniquely positioned to become a leader in AI inferencing and have a lot more in store across the entire AI lifecycle to help enable companies to build the future.”

  • I would like to see clear signs that revenue growth will start re-accelerating again in 2H.

—> The Q3 revenue guide clearly indicates this - great!

  • I would like to see large customer QoQ re-acceleration.

—> Large customer growth jumped to an amazing +9.1% QoQ, up from 5.6% last Q (196 new large customers, up from 114 last Q).

  • I would like to see continued, good profitability margin progress.

—> Looking good: operating income margin stayed at 6.6%. Net income margin jumped to a record 11%, up from 0% in Q2 last year and 9% last Q. FCF margin went to 6%, up from -2% in Q2 last year and 5% last Q. - check

  • I would like to see RPO QoQ growth re-accelerate.

—> RPO growth jumped from 5.7% QoQ last quarter to 8% QoQ this quarter. -check.

Overall, I would say well done, Cloudflare!

Ben

84 Likes

I’ve been out of $NET for a quarter now. Going over their ER. One thing I REALLY like.

I equate Channel Partner revenue to SASE revenue (at the least, I think it’s close). It ACCELERATED QoQ for the first time since last year’s Q2 report. Could this be indicative of SASE results this quarter?

$NET as a SASE play is still not great as this segment only makes up 15.28% of total revenue. I’d like to see it much closer to 50% to make it a viable SASE stock. Long $ZS

15 Likes

Workers and R2 from their next generation platform are performing incredibly well. From what they said on the call it sounds like newer AI applications are defaulting to using a Workers and R2 combination for efficiency and price.

Our developer platform, Cloudflare Workers, continues its explosive growth. We reached 10 million active workers applications in Q2, up 250% since December and 490% year-over-year. R2 continues to grow and now stores over 13 petabytes of customer data, up 85% quarter-over-quarter. We have 44,000 distinct paying customers with R2 subscriptions, and brand-name customers are beginning to adopt it as their primary object storage solution.

28 Likes

R2 continues to grow and now stores over 13 petabytes of customer data, up 85% quarter-over-quarter. We have 44,000 distinct paying customers with R2 subscriptions

As companies explore more and more AI applications, storage bills are one cost (along with egress fees, as with AWS S3) that will never stop going up. For archival purposes, one can almost never say “sorry, let’s throw away the data that’s more than 5 years old,” because the data scientists will scream “why won’t you keep it? It gives me better models!” And each time data goes out of the storage service to be used in training a model, you get charged an egress fee proportional to the size of the data and the size of the model architecture.

So my prediction was that R2 will slowly become the best place among cloud providers to store data for machine learning/AI model training – unless other cloud providers drop their egress fees, which is unlikely. And the amount of data stored in R2 will grow in proportion to each client company’s AI/ML maturity.
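To put rough numbers on why zero egress matters here, a back-of-the-envelope sketch (every price and size below is an illustrative assumption, not a quote from Cloudflare or AWS):

```typescript
// Back-of-the-envelope only: prices and sizes are illustrative assumptions.
const datasetTB = 50;              // size of the training corpus
const egressPerGB = 0.09;          // assumed $/GB egress at a typical hyperscaler
const trainingRunsPerYear = 12;    // pull the full dataset out to rented GPUs monthly

const hyperscalerEgress = datasetTB * 1024 * egressPerGB * trainingRunsPerYear;
const r2Egress = 0;                // R2 charges nothing for egress (per the call)

console.log(`Hyperscaler egress: ~$${Math.round(hyperscalerEgress).toLocaleString()} per year`);
console.log(`R2 egress:          $${r2Egress} per year`);
```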

On a side note, now you can use R2 as the backend for both Snowflake and Databricks:

Edit: only Snowflake for now. The Databricks announcement is for Data Share. But I predict Databricks Delta Lake collaboration will come soon.

This is great news for data science minded folks. And this type of adoption will take at least a year to gain traction. So I can see this start becoming a meaningful portion of their revenue in a year or so.

I continue to be bullish on Cloudflare at 15.6% of my portfolio. Only sold 1% of my shares since Jan 2023 to put into other places.

35 Likes

Thanks, @chang88

A place where I’m confused on the tech is whether the offerings of a company like Pure Storage are complementary to, or in competition with, Cloudflare’s R2 and similar storage solutions.

Some of these things seem to work on top of each other and some go head to head. I own both NET and PSTG. Do I own competitors?

Thanks!
JabbokRiver

9 Likes

@JabbokRiver42 Pure Storage is the hardware – data can be stored in flash arrays or spinning disks. Cloudflare’s R2 (or AWS S3, etc.) is the cloud software layer that abstracts away what hardware the data is stored on.
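To make that layering concrete, here’s a minimal sketch of writing to R2 through its S3-compatible API with the standard AWS SDK. The account ID, credentials, and bucket name are placeholders; the point is that the calling code never knows or cares what physical hardware sits underneath.

```typescript
// Minimal sketch: R2 is consumed through an S3-compatible API, so the caller
// never sees the underlying hardware. <ACCOUNT_ID> and the bucket are placeholders.
import { S3Client, PutObjectCommand } from "@aws-sdk/client-s3";

const r2 = new S3Client({
  region: "auto",                                            // R2 uses a single "auto" region
  endpoint: "https://<ACCOUNT_ID>.r2.cloudflarestorage.com", // R2's S3-compatible endpoint
  credentials: {
    accessKeyId: process.env.R2_ACCESS_KEY_ID!,
    secretAccessKey: process.env.R2_SECRET_ACCESS_KEY!,
  },
});

// Upload an object exactly as you would to S3.
await r2.send(
  new PutObjectCommand({
    Bucket: "training-data",            // hypothetical bucket name
    Key: "datasets/2023-07.parquet",
    Body: "example payload",
  })
);
```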

24 Likes

Maybe I’m confusing things, but from what I read in the transcript, what’s being stored at the edge on R2 is what’s needed for inference. They talked a lot about inference and were excited by it. But they made it clear the models were being trained elsewhere, not at the edge.

10 Likes

There are 2 separate directions to this that you are intertwining …

  1. A number of companies are now leveraging R2 as a cloud-neutral location (including “brand names” per mgmt). This covers all kinds of needs, but mgmt continues to highlight how R2 is being used by AI startups to store training data, which then gives them flexibility in renting GPUs from different clouds/regions for LLM training.

  2. Running AI inference at the edge is an emerging use case for Cloudflare Workers and their global edge network. This is bolstered by their new Constellation product as well as how mgmt hinted at GPUs at the edge [which we need to hear more details on]. I think this feeds into their data & streaming aspirations with Workers, esp Queues and PubSub. (A rough sketch of what this could look like from a Worker is below.)
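For point 2, a minimal sketch of calling a model from a Worker. The `INFERENCE` binding and its `run()` method are hypothetical stand-ins (Constellation’s real API may look different); the point is that the request is handled, and the model run, in the Cloudflare data center nearest the caller.

```typescript
// Hypothetical inference binding – a stand-in, not Constellation's actual API.
export interface Env {
  INFERENCE: { run(model: string, input: unknown): Promise<unknown> };
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // e.g. sensor readings from an IoT device, or a snippet of user data
    const payload = await request.json();

    // The model runs at the edge location that received the request,
    // one network hop above the end device.
    const result = await env.INFERENCE.run("small-classifier", payload);

    return Response.json(result);
  },
};
```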

-muji

19 Likes

Here is an answer from the CEO on the earnings call on those 2 directions (with some highlights in bold):

In the AI space, in particular, I think that, again, it’s such a new space that I don’t know that we’re displacing people as much as we’re just helping AI companies get what they need. And the two big areas around that are first around training where GPU scarcity is significant and the cost with the traditional hyperscale public clouds and moving data to wherever there’s cheap GPU capacity or even available GPU capacity, makes it cost prohibitive. And so R2, because we don’t charge for egress, has been just a real boon for a lot of AI companies to be able to adopt wherever they can find the cheapest GPU at any moment in time. And that again, it’s an area where a lot of that growth has come from.

And then increasingly, we think that the inference market is really going to be fought between two areas. One is going to be on your device itself. If you have a driverless car, you don’t want when a ball is bouncing down the street and the kid is chasing after it, for that decision on whether or not to put on the brakes to have to go out to the network now. You want that to live in the car itself. And so a lot of inference and models can be run on devices. But we think if they’re not a run on devices, if they have – if they’re too large, if they need too much capacity, from either a GPU or memory or network access space, in those cases, it’s going to make sense to run it in the network itself. And in that sense, Cloudflare is uniquely positioned to win in that inference market for those models that make sense not to run on the device themselves, the more complicated model that makes sense to run in – out at the edge of the network. And that’s exactly what we’re starting to see from more and more of these really innovative AI startups.

-muji

20 Likes

At the risk of getting too technical, I’m struggling to understand the ROI or value-add for doing AI inference “at the edge.”

As Prince says, there are cases where you’ll run AI inference on the end device itself - not just driverless cars, but Apple does face recognition on the phone for privacy reasons. But, then if you’re going to run it in the cloud because it’s a “complicated model,” why run that at the edge instead of on more powerful central computers?

The original reason for edge computing was to reduce latency, which is what drives continued use of CDNs. But, simply returning an image is a lot different from computing an AI inference: the 80ms or so potentially saved on an image get can be significant (especially if the web page has lots of images), but for an AI inference compute, that one-time additional latency might not matter much. And even there, AWS Local Zones can provide under 10ms latency within metro regions (fun fact, you can test AWS Local Zone latency here).

The more complex your model, the more you’ll want to run it on a central server, not a smaller edge server. OTOH, the simpler your model, the more likely you can just run it on the end device itself. Edge Computing today is becoming like plug-in hybrid cars - an interesting idea at first, but technology advances are making it ideal for fewer and fewer use cases.

15 Likes

Just a one-liner to say that the discussion of the edge by highly knowledgeable tech insiders here is very interesting and valuable to non-techie tech investors like myself :) Inter alia, it allows me/us to see how fluid the realities of cutting-edge tech are. I am sure many others appreciate the insights, so don’t worry about getting too deep and keep 'em coming!

13 Likes

As I understood the earnings call transcript, when it comes to running inference on the local device versus in the cloud, I don’t see much performance difference between a cloud data center at “the edge” and some other data center further away. I see two cases: run on the local device, or run in the cloud, with whatever performance tradeoffs exist between those two (memory, compute, network latency). Prince suggests that this scenario (local vs cloud) is the smaller deal, in his view, relative to another scenario involving privacy.

He says this in response to a question asking for more explanation of the inference use case. I have no idea if this warrants being a bigger deal as I would have to trust (or not) in management’s comments.

“The larger one, which again doesn’t feel like it’s a bigger deal, but we’re already seeing it play out some of the regulatory efforts that are happening around the world is that a lot of times for these inference tasks, the data that there is very private. People and governments want that to stay as close to the actual end user as possible.
So, we’ve already seen action in Italy that has restricted the use of certain AI tools because it sends data out of the country. What Cloudflare can uniquely do because we’re positioned across more than 250 cities worldwide, we are in the vast majority of countries worldwide is that we can actually process that information locally. So, again, we think that on device we are very close to where the user is on Cloudflare’s network, is going to be the place where inference going to take place.”

4 Likes

Smorg,

Absolutely the most complex (largest) models will run in the cloud. The big LLM engines are hosted services from the hyperscalers themselves or an emerging batch of providers (OpenAI, Anthropic, Cohere, Scale, et al). These are large and typically take several seconds to generate a result that then streams back to you.

However, there are also a number of open-source alternatives to those big LLM services that allow companies to create and fine-tune their own AI with the same power as GPT-3.5, approaching GPT-4, such as Meta’s Llama 2 or MosaicML (acquired by Databricks). Open-source efforts are focusing heavily on shrinking these models – getting smaller models to equal the performance of the large ones for far less compute, while giving far more control. With Llama 2 and others, these AIs come in a variety of sizes, from small models that fit on a smartphone, to medium-sized ones, to very large ChatGPT-sized ones.

The medium size above is potentially where Cloudflare could step in. Those medium-sized models could be hosted on the edge network, putting them within ~50ms of most of the globe and delivering AI results quickly to the calling side. Recall that their edge network sits between clouds, on-prem networks, apps, and a global user base calling those apps. This means Cloudflare could host AIs closer to apps calling in, and/or closer to users calling in.

I don’t think the benefit is on the app side… apps are typically already cloud-hosted, and so can be deployed very near the cloud environments hosting those large LLMs (say, in Azure to be nearest to OpenAI).

The benefit is on THE END USER SIDE of their globally distributed network. I thought mgmt was pretty clear about how they see AI inference at the edge – it is for those models that are too big for end devices and can instead be placed one hop above the device, on the network edge. Think about the variety of end devices out there generating internet traffic or data… laptops, tablets, smartphones, vehicles, IoT devices, sensors. That middle size of AI is perfect for putting at the edge.

This is NOT for autonomous vehicles, which will need on-board GPUs to be constantly making instant, real-time, life-or-death decisions. This is for AI engines that can sit atop Workers apps and the data being collected by Workers apps, and provide AI decisions very near the end user or device.

There are plenty of use cases that are not appropriate for on-device. These could be AIs that work over collective data in a region (such as data from groups of devices or apps being used) or over network traffic/routing for content. For lower-end devices with little compute power (such as in IoT), this is about making dumb devices smart. Apps could run AI over a lot of the collective user data being gathered, say, for fleet tracking or routing decisions.

But mgmt especially highlighted the data sovereignty use case – you can host AI models without having the data leave your region. Yes, this could potentially be done manually via Local Zones (mini regional clouds), which bring stronger compute to metro areas – but those come with a huge amount of deployment complexity and way less compute power (Local Zones don’t have all the same services that centralized cloud regions do – I doubt many even have GPU instances available). Apps with global or even just US-wide audiences aren’t benefiting from a Local Zone, only users in the immediate metro. Companies aren’t going to want to deploy into 20+ Local Zones just to cover the US when they can use one edge network. Cloudflare has built regional geofencing features into their edge network that can be used with AI services on Workers.

CEO in Q&A: “The larger one, which again doesn’t feel like it’s a bigger deal, but we’re already seeing it play out some of the regulatory efforts that are happening around the world is that a lot of times for these inference tasks, the data that there is very private. People and governments want that to stay as close to the actual end user as possible. So we’ve already seen action in Italy that has restricted the use of certain AI tools because it sends data out of the country. What Cloudflare can uniquely do because we’re positioned across more than 250 cities worldwide, we are in the vast majority of countries worldwide is that we can actually process that information locally.”
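As a rough illustration of the geofencing point above: Workers expose the caller’s country on `request.cf`, so an app can keep processing in-region. A minimal sketch (the “keep it in-country” policy here is made up for illustration):

```typescript
// Minimal sketch of region-aware handling in a Worker. request.cf is populated
// by Cloudflare's edge with request metadata, including the caller's country;
// the in-country processing policy below is illustrative only.
export default {
  async fetch(request: Request): Promise<Response> {
    const cf = (request as unknown as { cf?: { country?: string } }).cf;
    const country = cf?.country ?? "unknown";

    // Process the payload in the data center that received it and record
    // which country handled it, so the data never crosses a border.
    const payload = await request.text();

    return Response.json({ processedIn: country, bytes: payload.length });
  },
};
```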

-muji

31 Likes

Thanks, Muji, but while inference is much less processing- and data-intensive than training, it still has significant needs for both, and much more than I expect typical Worker jobs have required thus far. How well equipped is Cloudflare’s infrastructure, originally set up for CDNs and then expanded for Workers, to handle the higher processing requirements of AI inference? I’m thinking specifically of the GPUs or FPGAs usually used for inference that weren’t needed for CDNs nor for typical Worker jobs. Won’t all the edge locations need to be upgraded, and then how does that scale?

And, where does the data for inference live when run on the edge? Does every edge compute center need to have its own copy of the data against which inference is run? If so, is that data handled like a CDN, where there’s a central origin server and the data gets sent to the edge on first request and then updated as needed?

And finally, as I mentioned earlier, how many inference jobs that aren’t run on the end device actually need the reduced latency? And even then, one has to weigh the reduced latency against the slower compute/data access at the edge versus on a central server. For instance, if you save 40ms on latency but lose 60ms due to slower compute, you’re actually faster overall with the larger central server.
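A toy version of that comparison (the millisecond figures are made up for illustration):

```typescript
// Toy comparison: total response time = network latency + compute time.
const edge    = { networkMs: 10, computeMs: 160 }; // closer, but less powerful hardware
const central = { networkMs: 50, computeMs: 100 }; // farther, but beefier GPUs

const total = (t: { networkMs: number; computeMs: number }) => t.networkMs + t.computeMs;

console.log(`edge:    ${total(edge)} ms`);    // 170 ms
console.log(`central: ${total(central)} ms`); // 150 ms – central wins despite higher latency
```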

In any case, supporting AI inference workflows at the edge has required, and will continue to require, Cloudflare to beef up their edge infrastructure accordingly, and customers will likely want to run sample tests to understand performance before deciding. I’m not yet convinced this is as big a win for Cloudflare as Prince says it will be.

16 Likes