Consumption Model Differences

I just finished listening to the MDB call, and it strikes me that many people, even the analysts paid to understand these companies, do not know the differences between the consumption-based models, not just among the data platforms but across our companies in general.

Among our companies, the following have a consumption-based model (that I am aware of; I’m likely missing some):

  1. Datadog
  2. MongoDB
  3. Snowflake

After the whole FSLY debacle, many people (myself included) were wary of consumption-based models. However, I think the main lesson there is that if a company isn’t growing customers very well, stay away!!

For the most part, the Datadog consumption model is based on the number of devices (computers, switches) monitored as well as the number of metrics monitored on those devices. For example, a company has many servers supporting various applications. For each server, some of the metrics being monitored are things like uptime (is the device running), CPU usage, memory usage, disk usage, connectivity issues…
While this is technically a consumption-based model, these things do NOT fluctuate very much. Think of it like a company’s Office 365 subscriptions: a company rents Office for each of its employees, and when it adds or loses employees the bill fluctuates, but that’s really it. It’s very stable, a lot closer to a strict SaaS model.
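To make that stability concrete, here is a rough sketch of how a per-host model bills. The rate and host counts are made up for illustration, not Datadog’s actual pricing:

```python
# Hypothetical per-host billing: the monthly bill only moves when the
# number of monitored hosts changes, much like per-seat SaaS pricing.
PRICE_PER_HOST = 15.00  # made-up monthly rate per monitored host


def monthly_bill(num_hosts: int, price_per_host: float = PRICE_PER_HOST) -> float:
    """Monthly cost for a per-host (per-device) consumption model."""
    return num_hosts * price_per_host


# A company monitoring 400 hosts pays the same every month until it
# adds or retires infrastructure:
print(monthly_bill(400))  # 6000.0
print(monthly_bill(410))  # 6150.0 -- a small step up after adding 10 hosts
```

The point is that usage only changes when the customer’s infrastructure changes, which is why this behaves more like a seat-based subscription than like true metered consumption.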

MongoDB charges based on consumption as well, but it is based on how much hardware is provisioned & how many hours it runs (it may be metered down to the minute or second). This too is very stable; however, if a month has 28 days instead of 31, then revenue for that month will be lower by around 10%. One way companies like to save money on this model is to take applications they only need during business hours and shut them down when not needed. This cuts billed time from 24 hours/day down to 8 or 10.
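The day-count effect is easy to see with a quick sketch. The hourly rate here is made up; real Atlas pricing varies by cluster size and region:

```python
# Hypothetical provisioned-cluster billing: rate x hours the cluster runs.
HOURLY_RATE = 0.54  # made-up $/hour for one provisioned cluster


def month_cost(days: int, hours_per_day: float = 24.0) -> float:
    """Cost of running the cluster for a month of `days` days."""
    return days * hours_per_day * HOURLY_RATE


feb = month_cost(28)
mar = month_cost(31)
print(f"28-day month: ${feb:.2f}, 31-day month: ${mar:.2f}")
print(f"The short month bills {(1 - feb / mar) * 100:.1f}% less")  # ~9.7%

# Shutting a business-hours-only app down overnight cuts billed hours
# from 24/day to ~10/day:
print(f"24h/day: ${month_cost(31):.2f} vs 10h/day: ${month_cost(31, 10):.2f}")
```

So the ~10% February dip isn’t a demand signal at all; it falls straight out of the hours-times-rate arithmetic.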

MongoDB is working on a serverless option. They really have no choice, as the hyperscalers already offer this in their competing platforms (both NoSQL & SQL). This will add more variability to the revenue, though not as drastically as Snowflake’s model. In general, serverless automatically scales up & down based on the workload being presented. I use an Azure SQL Database serverless instance for managing my portfolio & earnings reports. It automatically shuts down after an hour of no activity, which saves me a lot of money.
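Here is a rough sketch of why that auto-pause matters for a lightly used database. The rate and usage numbers are invented, not Azure’s actual serverless pricing (which meters compute per second of activity):

```python
# Hypothetical serverless billing: compute is billed only while the
# database is active; it auto-pauses after an idle timeout.
COMPUTE_RATE = 0.25         # made-up $/hour while the database is active
ACTIVE_HOURS_PER_DAY = 1.5  # e.g. a personal portfolio-tracking workload
DAYS = 30

always_on = 24 * DAYS * COMPUTE_RATE                 # provisioned equivalent
serverless = ACTIVE_HOURS_PER_DAY * DAYS * COMPUTE_RATE
print(f"always-on: ${always_on:.2f}/mo, serverless: ${serverless:.2f}/mo")
```

For the vendor, the flip side of the customer’s savings is exactly this variability: revenue now tracks actual activity rather than provisioned capacity.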

Snowflake bills on the consumption of compute resources all the way down to the individual query. Snowflake said on the call that 70% of queries run on their system are automated & 30% are human generated. They also said the automated queries are very steady but the human-generated ones are not. This means things like extra holidays, or a pandemic quarter where more people are out sick, can have an impact on revenue because those workloads are not running as they usually do.
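A toy model shows why that 70/30 split matters: the automated portion stays flat while the human portion swings with how many people are actually at their desks. All numbers here are invented for illustration:

```python
# Toy model: total billed credits = steady automated workload plus a
# human workload scaled by how active users are that month.
AUTOMATED_CREDITS = 700     # steady scheduled jobs (70% in a normal month)
HUMAN_CREDITS_NORMAL = 300  # ad-hoc human queries in a normal month (30%)


def monthly_credits(human_activity: float) -> float:
    """human_activity = 1.0 in a normal month, <1.0 with holidays/sick days."""
    return AUTOMATED_CREDITS + HUMAN_CREDITS_NORMAL * human_activity


normal = monthly_credits(1.0)  # 1000.0 credits
quiet = monthly_credits(0.8)   # humans run 20% fewer queries
print(f"normal: {normal}, quiet month: {quiet}")
print(f"total consumption down {(1 - quiet / normal) * 100:.0f}%")
```

Note the dampening: a 20% drop in human queries only dents total consumption by about 6%, because the automated 70% keeps humming regardless.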

I don’t follow this last one as closely; I’m a data professional, so the previous three are things I understand very well. That means my understanding of its consumption model might be inaccurate, so feel free to correct me. It generates money in two ways. The first is a straight SaaS subscription. This is very steady but a smaller portion of overall revenue. The real money comes in per transaction. The more companies using it (directly, or through partners that white-label it), the better, though it’s likely better to have customers with, say, a lot of travel and other expenses. It is a real flywheel, because they can add vendors & customers and make it easier and easier…
Anyway, the revenue in this one seems the least steady, in my opinion. If travel shuts down or we hit a recession, companies may cut some of their spending, and the revenue might not grow as well.

Hope this helps clear up some of the confusion.

Add to & correct as necessary.



My understanding of Datadog is that the consumption is based on application logs. This can in fact fluctuate for various reasons. For example, a quant trading firm might generate more logs on a more volatile day. Utility company applications might generate more logs when there are more humid days than normal.


Sorry, I wasn’t trying to be absolute and say it’s the ONLY way they charge. I was trying to simplify it so the differences are clearer for people.

Datadog has 13 modules listed on their pricing page. Most follow the model I described above, based on the device (or, per their pricing page, per host). There are two based on application log volume, though, as you describe: the log management module and the cloud SIEM module. I’m not sure how variable those will be. They would likely be more variable, but I don’t know that they would be as variable as Snowflake’s or MDB’s models. I guess if a company usually has major events that fill up its logs, then fixes them so the logs don’t fill as much, you’d see lower usage, but… perhaps if there are a lot of random issues flying around month to month… I don’t know. Perhaps someone with more knowledge in this specific area can help…