Trading MDB for ZS

I have relatively modest positions in MDB and ZS as part of a (presently) 11 stock portfolio.

I really like the products of both companies. My company uses MDB and we really like it. We don’t use ZS but we use a competing product, and it really sucks. I’ve seen demos online of ZS and I think it is a compelling and differentiated product from what we use.

I normally wouldn’t be holding MDB due to its relatively modest growth rate. However, I am a big believer that Atlas will increase their growth trajectory over time. One situation that I’ve seen play out multiple times is that an increasing growth trajectory usually leads to valuation increases, and thus a “double benefit” to share price appreciation. So that has been the nature of my bet on MDB to date.

I’ve held MDB since 3/11 and I’ve seen about a 10% share price appreciation so far, which has lagged most of my portfolio over that span. I am not dissatisfied, but I am always looking to optimize.

A deep dive on the numbers suggests to me that I should sell MDB and buy ZS. Here are the supporting numbers that I consider:

ZS financials (recently priced at 237.83) —

2020 Revenue:
Q1-94 Q2-101 Q3-111 Q4-126
2021 Revenue:
Q1-143 Q2-157 Q3-176

6-month Y/o/Y growth: 57%
Most recent QoQ growth: 12%
Most recent Beat: 8.2%
Most recent full year guidance Raise: 38.2%
Most recent gross profit: 137
P/S current Q: 46
P/S mid-guidance: 37
P/GP current Q: 59
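(A quick sanity check for anyone who wants to follow the arithmetic: the two growth figures fall straight out of the quarterly revenue above. This little Python sketch uses only the numbers already listed, in $ millions.)

```python
# Reproducing the ZS growth figures from the quarterly revenue above ($ millions).
rev_2020 = {"Q1": 94, "Q2": 101, "Q3": 111, "Q4": 126}
rev_2021 = {"Q1": 143, "Q2": 157, "Q3": 176}

# 6-month Y/o/Y growth: last two reported quarters vs. the same quarters a year ago
six_mo_yoy = (rev_2021["Q2"] + rev_2021["Q3"]) / (rev_2020["Q2"] + rev_2020["Q3"]) - 1

# Most recent QoQ growth
qoq = rev_2021["Q3"] / rev_2021["Q2"] - 1

print(f"6-month Y/o/Y: {six_mo_yoy:.0%}")  # prints 57%
print(f"Most recent QoQ: {qoq:.0%}")       # prints 12%
```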

MDB financials (recently priced at 358.45) —
FY2020 Revenue:
FY2021 Revenue:
Q1-130 Q2-138 Q3-151 Q4-171
FY2022 Revenue:

6-month Y/o/Y growth: 39%
Most recent QoQ growth: 6%
Most recent Beat: 8%
Most recent full year guidance raise: 13%
Most recent gross profit: 127
P/S current Q: 30
P/S mid-guidance: 28
P/GP current Q: 46


MDB is a very fine company, but it is growing more slowly than ZS for now, and it correctly trades at a lower valuation. ZS is also a great business, but it is growing faster, and as such its valuation is actually converging with MDB’s if prices stay the same.

If current trends continue, ZS’s growth will erase the valuation gap unless its shares rise faster than MDB’s. Therefore, I should sell MDB and buy ZS unless I can think of catalysts that would alter the current trend.

As I do not think the Atlas catalyst will fundamentally alter the situation over the next two quarters, and if anything enterprise security is more strongly favored in the current environment, I should sell MDB and buy ZS.

I appreciate any thoughts or insights from the board on this question. Equally interested in confirming or conflicting guidance.




Hi Rob,

I can’t fault your logic. I’ve held both since 2018. MDB has grown an average (across multiple entry points) of 209% since then, whereas ZS has grown over 256% in that same timeframe. But then I look at CRWD and NET. CRWD has grown 336% since Feb 2020 (so 19 months) and NET almost 50% in just 6 months!!!

MDB 209% 38 months
ZS 257% 33 months
CRWD 300% 19 months
NET 50% 6 months

That trend implies that NET (based entirely on stock performance numbers) is the one to hold. I too, really like MDB, but for similar reasons, I’ve also been considering dumping it for something else.

My highest conviction companies right now are (in no particular order) TTD, SNOW, UPST, DDOG, FUBO, and MGNI. I still like MDB, OKTA, and CRWD, though. Especially CRWD. But if I’m being completely honest, MDB and OKTA don’t actually fit in with my investing philosophy, which is to stay away from “infrastructure” companies. And seeing how long I’ve held each, and their relative performance in my portfolio, it’s time I admit they don’t belong.

Okay, so, this probably doesn’t help you much, but it sure was helpful to me to talk through my own current holdings! :slight_smile:


Paul - no longer very long on MDB!


I haven’t owned ZS for a while (I used to, and I still track it, and keep an eye on them), but I do continue to own MDB as one of my bigger holdings.

MongoDB is generally the company I own that I consider the most expensive, valuation-wise (at least partially because I don’t own SHOP, SNOW, or NET). Yet I don’t have even an inkling to sell any MDB based on valuation.

For the majority of the companies I own, I feel pretty confident that they’ll still be growing at a high rate for the next 3-5 years, but my conviction is more cloudy beyond five years.

On the other hand, MongoDB is a company that I expect will still be growing at a high clip 8-10 years from now. It’s not going to grow at the 80%+ rate that some companies we follow do, but I can potentially see them staying at or above 35%-40% for most of the next five years, followed by 25-30% for the next five.

Now that’s a lot of growth for a very long time, in a software industry where new competitors show up all the time, but if they grow like that for 10 years, Mongo will be about a $10 Billion annual revenue company ten years from now (potentially still with 70% margins and recurring revenue!).
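(To show the arithmetic behind that $10 billion figure: a rough compounding sketch in Python, assuming a ~$590M FY2021 base, the sum of the four quarters quoted upthread, and growth rates near the middle of the ranges above. The exact result shifts with which ends of the ranges you pick.)

```python
# Rough compounding check, not a forecast: ~$590M base (130+138+151+171),
# ~37.5%/yr for five years, then ~27.5%/yr for five more.
rev = 0.590  # $ billions, FY2021
for _ in range(5):
    rev *= 1.375
for _ in range(5):
    rev *= 1.275

print(f"${rev:.1f}B")  # prints $9.8B — roughly the $10B ballpark
```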

Of course a lot can happen in ten years and maybe it doesn’t play out like that. But if any of the companies I own today are going to grow that consistently for a decade, I bet it will be MDB.

And it’s still in the early days of the shift from traditional legacy SQL relational databases, to non-relational (NoSQL) databases, but that is certainly where the market has been headed, and I bet it continues that way for well over the next 10 years, and MongoDB is the leader in NoSQL and continuing to get stronger.

This site believes NoSQL will grow from $2 billion in 2018 to $22 Billion in 2026, more than +31% annual average growth

and this one says it will grow from $3 Billion in 2019 to $25 Billion in 2027, also greater than +30%/year growth…

If MongoDB doesn’t at least match the growth of the overall NoSQL database market, then I’d be very surprised. I’m betting they’re going to keep growing faster than the overall market. They won’t just be expanding within NoSQL, but they’ll also be converting companies that had previously used relational SQL db’s as they upgrade and ensure they are storing the mountains of data that are generated every day in the best way possible.

Most of the companies that are already MDB’s largest customers only use MongoDB for a very small percentage of their db business and still have the large majority of their databases as old school relational ones and they want to shift more and more to MDB. So that’s a ton of high likelihood growth that will be pretty easy, before I even start to think about Mongo’s new customers that are out there that haven’t started doing business with MDB at all yet.

MDB may not double or triple over the next year, but I’m pretty confident they’ll be a stable, market beating, profitable investment in my portfolio for years to come. That’s not to say that Zscaler won’t do better, they might. I’m not following them quite as closely. But for me at least, I expect MDB will be a core holding for years to come unless something in the story changes pretty dramatically.



I am contemplating the same thing. Something needs to go if I buy ZS, and looking at MDB. Came across this article:

A little techie for me to wrap my head around. I believe what he is saying (feel free to correct me) is that MDB has lots of new innovations and big ambitions. They are moving into the OLTP (online transaction processing) market. He says IF they can pull it off, it is a $73B TAM. Snowflake is in the OLAP (online analytical processing) market, FYI. I have no real understanding of the distinctions. The author owns 12% SNOW and no MDB, although he likes their prospects.

I might have to put MDB in the too hard to understand pile for now, and opt for ZS. I will see how their earnings go…

I am contemplating the same thing. Something needs to go if I buy ZS, and looking at MDB.

I know we don’t get into portfolio management here. I see that as more of a ban on “stocks vs. bonds”-type discussions, and I’m always interested in how seasoned SaulFools allocate funds among their stocks. Some refine the “confidence continuum” into bins to some degree, and it’s clearly a fluid thing over time, based on the latest news of the day/week for many. The “something needs to go” sentiment, though, appears to indicate a hard cap on the number of stocks you’ll allow in your port. Is that crucial?

As an aside, I haven’t posted a stock port update because (a) it doesn’t change very often, (b) I don’t have anything to add beyond what others say, and (c) almost 40% is in SHOP due to its very ‘rapid’ [in the last 5 years sense, not the Saul sense] growth, which is kind of embarrassing. I know I SAY I want every stock I own to grow to be 95% of my portfolio, but that doesn’t mean it’s a great thing for it to happen. Trimming that back to 30% would free up funds for other stocks. Maybe even ZS.

I might have to put MDB in the too hard to understand pile for now

FWIW, I jettisoned MDB last fall partly for this reason, somewhat to the detriment of my returns. I think we’re all more comfortable investing in what we know (a foundation of Lynchian investing), and it makes sense to acquire enough knowledge about an industry we want to make money investing in so we can feel educated enough to make good decisions. If I was just a mutual fund person I’d have no clue how point-of-sale tech works, what a demand-side ad firm does, why AI makes lending better, or what makes one security SaaS provider better than another. But there’s a limit to how much I can take in, given my non-software background (I labored to pass a required FORTRAN class in college); knowing how the company makes money is one thing, but being able to get the nuances of threats and opportunities to a company like MDB is a heavy lift when a passage like,

“multi-cloud is the achilles heel of any proprietary hyperscaler solution. A custom-built data interface and storage engine creates a unique set of interactions and patterns for a developer to learn. Multiplied across several cloud vendors, that represents a lot of mental overhead. Any large enterprise has to plan for multi-cloud, even if they primarily occupy one cloud vendor.” [from the linked blog post at]

has several important words that I don’t use the way they’re written, or at least have a hard time really internalizing. Reading that linked blog post makes me feel how I imagine low-key dyslexic people do. Talk about ‘mental overhead’. I tried to get MDB. Really, I did. In these cases I end up chalking the experience up to ‘the cost of not knowing about that stuff’, and move on, at peace with the knowledge that the people who take the time to learn more of the details will probably end up richer than I will. : )

-n8 (ya don’t have to be smart to be a good Saul-style investor, but it dang sure don’t hurt)


And it’s still in the early days of the shift from traditional legacy SQL relational databases, to non-relational (NoSQL) databases, but that is certainly where the market has been headed, and I bet it continues that way for well over the next 10 years, and MongoDB is the leader in NoSQL and continuing to get stronger.

The rise of NoSQL can be tied to the rise of IoT and the general massive increase in gathering data from users. Contrary to what mekong states, I believe the shift to NoSQL is old hat today, and it has created problems for today’s important analytics use cases. People jumped onto NoSQL because it was expedient in terms of gathering the data, but it has created problems in using that data.

Most of the companies that are already MDB’s largest customers only use MongoDB for a very small percentage of their db business and still have the large majority of their databases as old school relational ones and they want to shift more and more to MDB. So that’s a ton of high likelihood growth that will be pretty easy, before I even start to think about Mongo’s new customers that are out there that haven’t started doing business with MDB at all yet.

We’re at a big juncture in DBs right now. Database technology is, surprisingly to me at least, fast moving these days. OTOH, companies using databases are often slow adopters. I remember the days when companies would skip one or two SAP versions since a new version came out each year and they weren’t capable of keeping up. Often, companies would postpone upgrading until a feature they really wanted was included or the supplier forced them to upgrade by discontinuing support for the older version. So, there’s been a historical mismatch of how quickly new capabilities hit the market versus how quickly companies adopt those new products.

Here’s a brief SQL/NoSQL primer: SQL is like a table with well defined columns and each chunk of data you gather is a row in that table. I’m sure this isn’t the way Amazon does it, but think of a table of Amazon customers, where each row represents a customer and the columns in the table are for things like email address, delivery address, credit card #, Prime or not, etc.

NoSQL is free-form, really just a bunch of “key-value pairs.” Think of a text document where you place a word on the left, then a colon (or other delimiter), then some text (including numbers) representing the value on the right. You can use any word (the “key”) you want on the left-hand side. A row from an Amazon customer table might then be represented as a NoSQL “document” like:

CustomerAddress: 123 Main St. AnyTown, MA 01234
CreditCard#: 1234 5678 90123
Prime: Yes

Because this is just a document, you can actually add new keys, and/or you can omit some keys, too. So, you will have more information on some users than others. This is NoSQL.
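In code terms (a toy Python sketch; the field names are just the made-up ones from above), each NoSQL document is essentially a dict, and no two documents have to agree on their keys:

```python
# Toy illustration of the primer: NoSQL "documents" as Python dicts.
# Field names are the invented ones from the example above.
customers = [
    {"CustomerAddress": "123 Main St. AnyTown, MA 01234",
     "CreditCard": "1234 5678 90123",
     "Prime": "Yes"},
    # A second document can add keys or omit them without touching any schema:
    {"CustomerAddress": "456 Oak Ave. OtherTown, CA 98765",
     "Prime": "No",
     "WholeFoodsDelivery": "Yes"},
]
```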

NoSQL is more flexible: when Amazon adds WholeFoods as a sub-business, they simply start putting in WholeFoods related keys for users that invoke that service. It’s literally like adding new text to documents.

With SQL, Amazon would have to modify the table (with an SQL ALTER command which can take a long time to process and so sometimes they actually create a new table and copy the data over), AND they would have to figure out what to put for values in the new column for customers that haven’t yet used WholeFoods since every row in that table has that new column and needs to have some entry. And not only that, you have to define the format of the type of data in that column, which has to be the same for all cells in that column.
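Here’s that same change sketched against a real (if tiny) SQL database using Python’s built-in sqlite3 (the table and column names are invented for illustration). Note how the ALTER forces a default value onto every existing row:

```python
import sqlite3

# Invented schema for illustration, not Amazon's actual tables.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE customers (address TEXT, prime INTEGER)")
con.execute("INSERT INTO customers VALUES ('123 Main St', 1)")

# Adding WholeFoods support means altering the table; every existing row
# now has the new column, so you must pick a value (here, a default of 0)
# for customers who have never used the service.
con.execute("ALTER TABLE customers ADD COLUMN wholefoods INTEGER DEFAULT 0")

row = con.execute("SELECT address, prime, wholefoods FROM customers").fetchone()
print(row)  # prints ('123 Main St', 1, 0)
```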

So, why hasn’t everyone switched over to NoSQL? Because while NoSQL makes it super-easy to add new keys, it makes it harder to deal with that data later (those “analytics” workflows). SQL guarantees that every row in the table has the same number and type of data fields - NoSQL has no such guarantee. With NoSQL you have to handle missing fields, extra fields, and - worst of all - the potential that the values for a given field are in different formats. So, when you go to make use of the data in a NoSQL database, you not only have to handle missing data, you have to handle any variations in the data values. Heck, if you’ve got multiple applications putting data into the database, you might find that they use slightly different keys, and you’ve got to handle that on the processing side as well. The “processing” side of analyzing data in a NoSQL database is often a time-consuming, expensive process.
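A toy example of that processing burden (invented documents, plain Python): even a simple “how many Prime members?” question has to defend against missing keys and inconsistent value formats:

```python
# Toy documents, not a real Mongo query: three apps wrote three variants.
docs = [
    {"Prime": True, "State": "CA"},
    {"Prime": "yes", "state": "CA"},   # different key casing and value format
    {"State": "MA"},                   # Prime key missing entirely
]

def is_prime(doc):
    # Normalize the variations an SQL schema would have ruled out up front.
    val = doc.get("Prime", doc.get("prime"))
    return val in (True, 1, "yes", "Yes", "Y")

prime_count = sum(1 for d in docs if is_prime(d))
print(prime_count)  # prints 2
```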

End Primer

When you think about it, SQL forces you to think up front about how your data is going to be stored, which results in ease of dealing with that data later. NoSQL lets you do what you want to capture the data, and then you have to process that data to get it into shape for analytics. In many organizations today, data is captured from various sources: not just web sites and mobile apps, but small IoT devices and from other B2B services (think of a 3rd party seller on Amazon having an automated system to capture orders and so getting data from Amazon, but also selling stuff on Shopify or Etsy). There’s no way to enforce that all these sources, some of which you don’t own, will use the same “schema” (a fancy way of saying which columns) for your SQL tables.

With NoSQL, you just capture the data and deal with it as it comes in. If you’re doing “transactional” type work (for instance, processing orders), then you deal with each input source to send the order out and then store the data for later. With SQL, you’d have to write applications that translate each order source into your format to be able to store it; if a source changes, you have to change too, and you might have to add/subtract columns (changing the “schema”). So, NoSQL is a clear win for transactional (OLTP) use.

OTOH, more and more companies have the desire/need to leverage all the data they’ve gathered to create more profits. Knowing what customers have ordered in the past can help you decide not only what marketing to send their way, but also to help you with inventory prediction, or deciding which new products to design and manufacture, etc. This is done through “analytics,” which simply is analyzing the data you have. Here things are the reverse: NoSQL’s free-form input creates headaches in trying to get data from various sources to represent the same things while SQL’s rigidity means you just analyze what you have. (Yes, I’m oversimplifying, but this is basically it). This is the OLAP use case. (I hate both those terms since they’re just one inside letter apart).

In terms of competition, let’s talk Snowflake. Snowflake is mostly an SQL database, so it has all of the analytical use case advantages, but it also smooths over many of the input/transactional issues with SQL. For instance, one obvious solution people tried with SQL databases when they had additional information coming in was simply to create additional columns for the information coming in different formats or representing slightly different things. This works, but then you have a space problem, as SQL databases have rigid space requirements: you have to be able to insert or re-order rows (meaning each row has to be the same size regardless of whether you use all columns or not). Snowflake solves this with internal compression that you as a Snowflake customer don’t see (and it doesn’t impact performance that you can tell).

Snowflake also lets you store NoSQL data within a “document” that is inside an SQL table cell. And you can directly reference data in that document using a “dot” notation, which essentially means you can write your efficient SQL analytical workflows that access both SQL and NoSQL data. Tastes great; less filling. You can also convert your NoSQL data into SQL data (“NoSQL flattening”) if you want. Note that internally, Snowflake is doing all kinds of conversions and processing to enable fast access to both the SQL and NoSQL data. This is all behind the scenes - which means that as Snowflake develops better algorithms you as a Snowflake customer just see better performance. Any application you’ve written does not need to be changed. Contrast that to MongoDB, where any change to the data storage or access requires a rewrite on your part.
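The flattening idea itself is easy to sketch. This is just the concept in plain Python (Snowflake’s actual machinery is internal and far more sophisticated): nested document fields become flat, dot-notation keys you can treat like columns:

```python
# Concept sketch of "NoSQL flattening": turn a nested document into
# flat dot-notation keys, column-style.
def flatten(doc, prefix=""):
    flat = {}
    for key, value in doc.items():
        path = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten(value, path))  # recurse into nested documents
        else:
            flat[path] = value
    return flat

doc = {"customer": {"address": {"state": "CA"}, "prime": True}}
print(flatten(doc))  # prints {'customer.address.state': 'CA', 'customer.prime': True}
```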

Another major technological shift that’s been going on for a while is from on-Premise servers (machines in your own machine room) to Cloud servers (AWS, Azure, etc.) and then to hosted services on Clouds, and now to full-on Database as a Service (DaaS I guess) that combines both data storage and data processing compute. Here are some of the differences:

• OnPrem: You buy machines install and configure software, run your own connections, upgrade manually, run and manage backups, and perform all hardware and software maintenance.
• Cloud: You use machines configured by Amazon or Google (or Oracle, etc), and you install software on them. You may or may not need to perform upgrades and backups manually, but you have to at least configure backups manually.
• Hosted: The software is installed and upgraded and backed up for you, but you still choose parameters that affect performance since that also affects cost. More importantly, your applications are running in some other hosted Cloud instance and so there’s communication and scaling costs.
• DaaS: You don’t worry about anything except your data - the hosting is ALL handled for you transparently. Even more, however, most/all of your applications can run inside Snowflake itself. You don’t need a separate compute hosted service that gathers data from your hosted DB and then processes it, the processing is done inside of the database itself.

MongoDB started out as open-source OnPrem software. They then enabled it to run on the Cloud, but you still have to set up your own clusters and backups and such. Their latest product is Mongo Atlas, which is a “fully-managed” service, which is great, though Mongo was slow to recognize the need for it. See some of the differences on Mongo’s AWS page:…

But, even a fully-hosted database is not what I call DaaS. With DaaS you not only don’t have to worry about the mechanics of storing, maintaining, and backing up your data, you also get performance advantages since you often don’t need a separate Cloud instance to process that data. Imagine you want to find out what percentage of Amazon customers in California have Prime AND use WholeFoods delivery services.

With Mongo Atlas, you write a program that runs in an AWS instance that queries your Atlas database (another instance) and gets each record, looking to see if that user is in CA and then if he/she has Prime and then if he/she has used WholeFoods. If so, you add to a variable. With Snowflake you write a program (perhaps in SnowPark) that runs inside of Snowflake and returns you the result. The difference is in data flow. With Mongo, you’re not only creating a new instance to process this, you’re paying Amazon for the data query AND waiting for the interprocess communication to happen. (This is an over-simplified example in many ways, and this hides the NoSQL pre-processing that might need to happen but also perhaps overlooks some Mongo built-ins for simple operations, but the general point remains).
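To make the client-side half of that concrete, here’s a toy Python sketch of the “pull every record and aggregate in your own instance” pattern (invented sample data, not a real Atlas or Snowflake call):

```python
# Invented sample records; in the Atlas scenario each of these would cross
# the network from the database instance to the app instance before any
# filtering or counting happens.
records = [
    {"state": "CA", "prime": True,  "wholefoods": True},
    {"state": "CA", "prime": True,  "wholefoods": False},
    {"state": "CA", "prime": False, "wholefoods": False},
    {"state": "MA", "prime": True,  "wholefoods": True},
]

# Client-side pass: filter to CA, then count Prime + WholeFoods users.
ca = [r for r in records if r["state"] == "CA"]
hits = [r for r in ca if r["prime"] and r["wholefoods"]]

print(f"{len(hits) / len(ca):.0%}")  # prints 33%
```

With the push-down model, the same filter-and-count logic runs next to the data and only the final percentage crosses the wire.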

You can read about SnowPark here:…

Snowpark takes care of pushing all of your logic to Snowflake, so it runs right next to your data. To host that code, we’ve built a secure, sandboxed JVM right into Snowflake’s warehouses—more on that in a bit.
But wait; there’s more! Let’s say that you wanted to apply your PII detection logic to all of the string columns in a table. With SQL, you’d have to hand-code a query for each table—or write code to generate the query. With Snowpark, you can easily write a generic routine… and with this generic routine in hand, you can mask all of the PII in any table with ease…Snowpark takes care of dynamically generating the correct query in a robust, schema-driven way.

OK, this has gotten pretty involved technically, but I hope I’ve expressed it in terms laypeople can understand.

So, what’s my point? It’s that MongoDB is doomed. Certainly not this year or next, but in the long run it’s doomed. I freely admit that decline will be gradual. Remember how I started this post talking about how companies skip versions? They’re even slower to adopt newer/better technologies. But, more and more companies are realizing that just being able to perform transactions within their DB isn’t good enough business-wise. They need to be able to gather intelligence and take action based on the data they’ve collected (analytics). If they don’t, they’ll simply be out-competed in the marketplace.

My last job involved setting up a two-way IoT system for vehicles. The proposal called for three databases: Amazon S3 (Simple Storage Service) to store all the data coming in, since it is easy and cheap (and NoSQL); Amazon Redshift (a competitor to Snowflake) for analytics, which meant moving data we wanted to analyze from S3 to Redshift; and a MongoDB database for transactional processing to support our mobile app. With Snowflake separating the cost of storage from compute, today I’d push to store it all in Snowflake for storage and analytics, and I think SnowPark could handle supporting the mobile apps as well. Any cost savings from S3 or even Mongo would be minimal compared to the programming ease and performance gains.

Investing-wise, I clearly got out of MDB too soon, and even now may be “too soon” business-wise, but I think that’s better than too late. With every company that has data needing to analyze that data, I don’t see how the superior Snowflake model doesn’t win in the long run.


I’ll put an annoyingly blunt point on it.
SQL databases fit logically and perfectly with spreadsheets, and Microsoft has developed pivot and dashboarding tools (Power BI) on top of them. In terms of design, relational databases are closely coupled with the design of spreadsheets - lookups, keys, value filtering, and data organization.

NoSQL databases require JSON handling, with rules for disassembly and the development of bespoke data analysis and metrics to extract meaningful information from them. (Probably a dated opinion; I’m sure better tools have evolved in the last 3-5 years.)

The key is that the accomplished business power user can gain meaningful insights from a relational database downloaded to Excel / Sheets / (yes even) Numbers including basic machine learning (or at least basic statistical / correlation analysis) and insights.

Not as easy for that business power user to self-serve on MDB or the other NoSQL tools. The learning curve may be relatively easy, but there’s a Mt. Everest of inertia out there to overcome.

