Cloudflare @ Morgan Stanley Technology conference

On March 6, Cloudflare’s CFO Thomas Seifert presented at the Morgan Stanley Technology conference.

Highlights from the presentation include:

  • First networking cloud connecting all of our customers and everything they do
  • Customers able to move data wherever it needs to go
  • Q4 was a strong quarter; the good news is that it was not driven by any one-off or large deals
  • Growth was driven by a broad set of growth vectors
  • Everything we initiated with the go-to-market is taking hold
  • Reached feature parity on Zero Trust with everyone else out there
  • During Covid, traffic on our network spiked by 60% in a couple of weeks. Folks expected our margins to tank, but they actually improved
  • The Zero Trust products literally move traffic in the reverse direction of other Cloudflare services. It’s all about moving traffic back, and that traffic comes for free
  • Margin-wise, our Zero Trust products are well above 90%
  • We could consolidate all the Zero Trust providers out there, put all of that network traffic on our network, and not need to invest a single dollar of CapEx to handle it
  • Zero Trust products are sold to the CEO and CFO, rather than the CIO, which was typical with CDN products
  • CEOs prioritize different topics, like compliance, data sovereignty, and data localization
  • The sales channel is an important part of the go-to-market strategy; channel revenue grew by 70%
  • A new sales leader, Mark Anderson, joined
  • Very large deals, especially the Department of Commerce deal from last quarter, drive interest from other large channel partners. Other departments see that you can sign a $30-50M deal with Cloudflare
  • The product and platform are compelling
  • The superior margin structure lets Cloudflare reward its partner system, because partners can stay competitive on pricing
  • Can’t think of an AI company that is not behind our network at this point in time
  • A use case they did not expect: the GPU capacity shortage is driving LLM companies to put their data in Cloudflare and then use Cloudflare as a departure point to find available GPU capacity for training models
  • We just help them find affordable GPU capacity without paying a huge amount for egress, and we transport the data to where the capacity is
  • GPUs will be in literally every location by the end of this year, enabling inference to run anywhere in the network depending on a customer’s compliance, cost, or speed needs
  • The vector database, made available last year, has a high attach rate to everything we sell
  • We just launched a product today for multi-cloud, called Magic Cloud or Magic MultiCloud; it’s like an interpreter that understands all public clouds
  • The interpreter comes with our backbone, which allows customers to move data between private and public clouds
  • Customers pay only for the GPU resources they use, for every service in every location
  • Matthew (the CEO) had the foresight to leave a PCI slot open on servers being installed; now all servers are being equipped with GPU capacity. These GPUs are affordable and highly available
  • We don’t need H100 cards at this point; we’re still procuring chips from Nvidia, but now also from a broader set of suppliers
  • Adoption of Workers and Workers AI is accelerating
  • If you look at download data on Workers, it’s a steep curve up
  • One third of developers signing up for Workers AI are net new
  • Variety of use cases coming to the network is huge
  • In the Goldilocks zone for inference tasks
  • When we push new technology, we push adoption, not revenue
  • We never discourage a byte of data moving through our network even if it’s free

One of the big takeaways from this conference for me was that Zero Trust onboards huge numbers of users and companies onto their platform, and it carries margins above 90%. The multi-cloud and Workers AI products sound compelling to me, and the rate of innovation is impressive. It sounds like the company has hugely revamped its go-to-market strategy and is ready to take more market share.


Wow. Thanks for summarizing and sharing. The ones that stood out to me:

    • We could consolidate all the Zero Trust providers out there, put all of that network traffic on our network, and not need to invest a single dollar of CapEx to handle it
    • Can’t think of an AI company that is not behind our network at this point in time
    • A use case they did not expect: the GPU capacity shortage is driving LLM companies to put their data in Cloudflare and then use Cloudflare as a departure point to find available GPU capacity for training models

Many abandoned this company last year, but to me this has all the appearances of a long-term winner.
