Earnings Call Transcript
Reflections on the earnings call
- An excellent example of a company successfully utilizing AI to enhance existing (non-AI) capabilities.
- Software Observability is clearly very important to AI-native companies.
- TAM continues to grow both from companies continuing to move to the cloud and from new AI-native companies.
- Guidance for the upcoming year seems conservative, which is appropriate at a time when AI and LLM capabilities are rapidly evolving.
DDOG represents 5.58% of my portfolio.
See also my previous post: DataDog (DDOG): Reviving an old favorite
Earnings Call Highlights
A few select comments from the earnings call transcript that I found notable; the transcript contains many other interesting comments as well.
Comments on growth …
Olivier Pomel, Co-Founder, CEO & Director
We saw a continued acceleration of our revenue growth. This acceleration was driven in large part by the inflection of our broad-based business outside of the AI-native group of customers we discussed in the past. And we also continue to see very high growth within this AI-native customer group as they go into production and grow in users, tokens and new products.
[…]
Finally, churn has remained low, with gross revenue retention stable in the mid- to high 90s, highlighting the mission-critical nature of our platform for our customers.
Comments on R&D Efforts …
Summary: R&D centered on either utilizing or supporting AI. Even their “summary” is extensive. A few highlights:
Olivier Pomel, Co-Founder, CEO & Director
Moving on to R&D and what we built in 2025. We released over 400 new features and capabilities this year. That’s too much for us to cover today, but let’s go over just some of our innovations. We are executing relentlessly on our very ambitious AI road map, and I will split our AI efforts into 2 buckets: AI for Datadog and Datadog for AI.
[…]
So first, let’s look at AI for Datadog. These are AI products and capabilities that make the Datadog platform better and more useful for customers.
[…]
Second, let’s talk about Datadog for AI. This includes capabilities that deliver end-to-end observability and security across the AI stack. We are seeing an acceleration in growth for LLM Observability.
[…]
We are working with design partners on GPU monitoring, and we are seeing GPU usage increase in our customer base overall. And we are building into our products the ability to secure the AI stack against prompt injection attacks, model hijacking and data poisoning among many other risks.
[…]
Overall, we continue to see increased interest among our customers in next-gen AI. Today, about 5,500 customers use one or more Datadog AI integrations to send us data about their machine learning, AI and LLM usage.
Comments on future growth …
Olivier Pomel, Co-Founder, CEO & Director
I want to say a few words on our longer-term outlook. There is no change to our overall view that digital transformation and cloud migration are long-term secular growth drivers for our business. So we continue to extend our platform to solve our customers’ problems from end to end across their software development, production, data stack, user experience and security needs. Meanwhile, we’re moving fast in AI, by integrating AI into the Datadog platform to improve customer value and outcomes and by building products to observe, secure and act across our customers’ AI stacks.
In 2025, we executed very well and delivered for our customers against their most complex mission-critical problems. Our strong financial performance is an output of that effort. And we’re even more excited about 2026 as we are starting to see an inflection in AI usage by our customers into their applications and as our customers begin to adopt our AI innovations, such as our Bits AI SRE Agent.
Comments on use by AI-native customers …
David M. Obstler, Chief Financial Officer
Among our AI customers are the largest companies in this space, as today 14 of the top 20 AI-native companies are Datadog customers.
Q&A: Comment on customers moving to DataDog from their own solutions …
Raimo Lenschow, Barclays Bank PLC, Research Division
[…] the 8-figure deal for a model company is really exciting. I assume they tried to do it with some open source tooling, et cetera, and actually went from paying almost no money to paying you more money. What drove that thinking? What do you think they saw that convinced them to do that? And it’s now the second one after the other very big model provider. So clearly, the whole debate in the market of “oh, you can do that on the cheap” is not quite valid. Could you speak to that, please?
Olivier Pomel, Co-Founder, CEO & Director
I mean the situation is just very similar to every single customer we land. Every customer we land has had some homegrown tooling. They have some open source. They might still run some open source; that’s typically what we see everywhere. “It’s cheaper to do it yourself” is usually not the case. Your engineers typically are very well compensated and a big part of the spend in these companies, and their velocity is what gates just about everything else in the business. And so usually, when customers start engaging with us, we can very quickly show value that way. So, it’s not any different from what we see with any other customer. And within the AI cohort, it’s not original at all; the AI cohort in general is a who’s who of the companies that are growing very fast and shaping the world in AI, and they’re all adopting our product for the same reasons, sometimes at different volumes because those companies have different scales, but the logic is the same.
Q&A: Comment on Moat …
Gabriela Borges, Goldman Sachs Group, Inc., Research Division
Congratulations on the quarter. Oli, I wanted to follow up on Sanjit’s question on how to think about where the line is between what an LLM can do longer term and the domain experience that you have in observability. If I think about some of Anthropic’s recent announcements, they’re talking about LLMs as a broader anomaly detection type tool, for example, on the security vulnerability management side. How do you think about the limiting factor to using LLMs as an anomaly detection tool that could potentially take share from observability over time in the category? And how do you think about the moat that Datadog has that offers customers a better solution relative to where the road map in LLMs can go long term?
Olivier Pomel, Co-Founder, CEO & Director
Yes. So that’s a very good question. We definitely see that LLMs are getting better and better, and we’ll bet on them getting significantly better every few months, as we’ve seen over the past couple of years. And as a result, they are very, very good at looking at broad sets of data. So, if you feed a lot of data to an LLM and ask for an analysis, you’re very likely to get something that is very good and that is going to get even better. So, when you think of what we have that is fundamentally our moat here, there’s 2 parts. One is how we are able to assemble that context so we can feed it into those intelligence engines. That’s how we aggregate all the data we get, we parse out the benefits, we understand how everything fits together, and we can feed that into the LLM. That’s in part what we do today; for example, we expose these kinds of functionality behind our MCP server, and customers can recombine that in different ways using different intelligence tools. But the other part, which is where we think the world is going for observability, is that right now the [ SDLC ] is accelerating a lot, but it’s still somewhat slow. And so it’s okay to have incidents and run post-hoc analysis on those incidents and maybe use some outside tooling for them.
Where the world is going is you’re going to have many more changes, many more things. You cannot actually afford to have incidents to look at for everything that’s happening in your system. So, you need to be proactive. You’ll need to run analysis in stream as all the data flows through, you’ll need to run detection and resolution before you actually have outages materialize. And for that, you’ll need to be embedded into the data plane, which is what we run. And you also need to be able to run specialized models that can act on that data as opposed to just taking everything and summarizing everything after the fact 10, 15 minutes later. And that’s what we’re uniquely positioned to do.
We are building that. We’re not quite there yet, but we think that a few years from now, that’s what the world is going to run, and that’s what makes us significantly different in terms of how we can apply anomaly detection, intelligence and preemptive resolution into our systems.