Nomura MDB ~ Extreme Valuation…

Raises PT to $63 from $43



A “downgrade” alongside a nearly 50% price target increase. This right here is why our time horizon…and our ability to play by our own set of rules give individual investors an advantage.


A “downgrade” alongside a nearly 50% price target increase.
But that increased price target is still below the current market price, meaning the analyst still expects the stock to lose value over the next year or so. What do people think? Is MongoDB priced to perfection? Is the competition risk being downplayed here?


I have no idea what will happen in the short term…or the long term. But I think MongoDB could be 10 times…or even 30 times larger than it is today in terms of market cap. I would have to assume the valuation multiple would come down over time, since it most definitely would not be growing as fast.

So in terms of stock price… I don’t know, that has to be good for a 500%–1,000% return.

What numbers am I basing this off of? Well, just to get a general guesstimate of future potential, I compared Oracle’s current market cap and revenue with MongoDB’s:

Oracle
Market Cap: 195B
TTM Revenue: 40B
Revenue Quarterly YoY growth: 1%
Gross Profit Margin: 79%

MongoDB
Market Cap: 4B
TTM Revenue: 192M
Revenue Quarterly YoY growth: 61%
Gross Profit Margin: 71%
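One way to put those figures side by side is to compute rough price-to-sales multiples. A minimal sketch, using only the numbers quoted above (as of the time of the post, not current):

```python
# Rough price-to-sales comparison using the figures quoted above.
oracle_market_cap_b = 195.0   # $195B market cap
oracle_ttm_revenue_b = 40.0   # $40B TTM revenue

mongo_market_cap_b = 4.0      # $4B market cap
mongo_ttm_revenue_b = 0.192   # $192M TTM revenue

oracle_ps = oracle_market_cap_b / oracle_ttm_revenue_b
mongo_ps = mongo_market_cap_b / mongo_ttm_revenue_b

print(f"Oracle P/S:  {oracle_ps:.1f}x")   # ~4.9x
print(f"MongoDB P/S: {mongo_ps:.1f}x")    # ~20.8x
```

In other words, the market was already paying roughly four times more per dollar of revenue for MongoDB than for Oracle, which is presumably what the analyst means by “extreme valuation”; the bull case is that the 61% growth rate compounds that revenue base fast enough to justify it.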

I don’t know how to actually estimate a potential future value for MongoDB. Atlas, their newest product, seems to be catching on and growing rapidly, but I strongly believe their biggest product in 5-10 years doesn’t even exist yet.

We can’t even imagine what it might be. But I’m trusting they will learn, innovate, disrupt, and repeat for many years as they grow.

Or…it could go to 0 one day. I’m betting on something more in line with the bull case.

Just my opinion.


“Atlas, their newest product seems to be catching on and growing rapidly”

It is not a product, it is a delivery method. It is their existing database product, hosted by Mongo. It cannibalizes existing product sales (though with better economics).


Good call out ajm. It even felt weird typing “product”. Sorry!


This is the part of the analyst note that I found interesting:

“MongoDB has a “compelling” multi-year, if not multi-decade, opportunity, but it faces “intense competition and deep-pocketed incumbency that won’t die easily,” the analyst contends. He points out that in the last year alone, Microsoft’s (MSFT) Cosmos DB grew from zero to $100M revenue…”

We have heard the MDB CEO himself state that they have seen no competition in NoSQL. The analyst seems to imply Cosmos is a competitor. Wonder what those with a better understanding of the tech think?

I suppose by incumbency he is saying SQL won’t die off. But does that mean NoSQL will not grow? While researching I came across this; thought it was interesting.


That’s fine; it’s borderline nitpicking on my part. Atlas adoption is actually a good sign for them. Mongo deployments are easy to get wrong, and the level of expertise needed to administer one sustainably outstrips the number of Mongo DBAs available.

The problem with Atlas or any DBaaS is the same as with any PaaS: worse overall margins, and the challenge of reaching the point where economies of scale kick in and cash flow breaks even. You can’t pull those 80%+ margins if you’re hosting, and it requires a dedicated team (devops, security, SREs) that must be amortized over multiple clients. Lastly, there are some fields where it’s going to be somewhere between a huge barrier to sales and a complete nonstarter (HIPAA/PII, certain financial data, classified/defense data, certain legal regimes/GDPR, etc.), so it’s only a subset of the overall TAM.

That said they can charge more, it’s stickier, it’s more efficient from an R&D and support perspective (smaller number of config/version permutations), and it reduces churn. The current DBaaS TAM is probably going to converge with the overall TAM, too, as more people get comfortable with it.


In the life cycle of stocks they will go up, they will go down. Analysts will be positive, analysts will be negative.

Nvidia is overvalued, GPUs are subject to disruption (look at FPGAs and Microsoft), Cisco is on the warpath against Arista… pick your poison, they are all there.

What do you think the buyout value is for Mongo? More than today, for sure.

Competition? When a choice is made to move on from SQL, MDB has little competition these days in the enterprise market other than from other SQL databases.

The analyst obviously does not understand that Mongo is not taking on Oracle, nor Cosmos, and that Mongo’s market share is so small that just moving to 5 or 10% of the market is enormous. It’s a $64 billion market, and Mongo will do about $235 million in revenue this year.

Morgan Stanley had a sell on Nvidia with a $99 price target for a long time, then upped it to the low $100s before conceding as it hit $225.

Such are analysts. Does his opinion comport with reality and the actual market? No. He clearly does not understand where Mongo plays in the market and why they largely have no competition there, among enterprises who not only use Mongo but likely spend 10x more on Oracle at the same time.



Christopher Eberle is rated 3,499 out of 4,881, given 1¼ stars out of 5 by TipRanks, with a 50% success rate and a 2.5% average return per rating. His all-time best call was Oracle last September, with a blistering 5.6% gain to date.

So there’s that.

What follows is a longish essay with a lot of historical content. Feel free to jump to the last three paragraphs if you want to skip most of it.

It might be enlightening to review a bit of the history of data handling by computers. Very briefly, when it comes to business data, everything was initially derived from the paper forms that originally constituted business transactions and record keeping. The paper forms had “blanks” to be filled in, which specified the variable data for the transaction or record.

The paper forms were more or less copied as computer records. The records were kept in a file of similar records, initially stored on magnetic tape. Records were distinguished from one another via different methods; the most common was “fixed length,” with each record being a specific number of bytes. The other method, used for more complex variable-length records, was an EOR (end of record) marker at the end of each record. There was also an EOF (end of file) marker that told the computer to stop looking for the next record.
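As an illustrative sketch of those two record-delimiting schemes (the field widths and the EOR/EOF byte values here are invented, not actual tape-era conventions):

```python
# Sketch of the two record-delimiting schemes described above.
# Field widths and marker bytes are invented for the example.

def read_fixed_length(data: bytes, record_len: int) -> list[bytes]:
    """Fixed-length records: every record is exactly record_len bytes."""
    return [data[i:i + record_len] for i in range(0, len(data), record_len)]

def read_eor_delimited(data: bytes, eor: bytes = b"\x1e", eof: bytes = b"\x1c") -> list[bytes]:
    """Variable-length records: each record ends with an EOR marker;
    an EOF marker tells the reader to stop looking for more records."""
    records = []
    current = bytearray()
    for b in data:
        byte = bytes([b])
        if byte == eof:          # EOF: stop looking for the next record
            break
        if byte == eor:          # EOR: the current record is complete
            records.append(bytes(current))
            current = bytearray()
        else:
            current += byte
    return records

# Two 10-byte fixed-length records (names padded with spaces):
fixed = read_fixed_length(b"SMITH     JONES     ", record_len=10)
# Two variable-length records, each terminated by EOR, then EOF:
variable = read_eor_delimited(b"SMITH\x1eJONES, JR.\x1e\x1c")
print(fixed)     # [b'SMITH     ', b'JONES     ']
print(variable)  # [b'SMITH', b'JONES, JR.']
```

The fixed-length scheme wastes space on padding but allows random access by arithmetic; the marker scheme saves space but forces a sequential scan, which is exactly why it suited tape.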

In 1968 IBM released IMS DB/TP (Information Management System Database/Teleprocessor). It was not the first database. Nomad, Atlas (sound familiar?), Cullinet and several other competing products came out in the same time frame. But it was an IBM product (as the saying went, no one was ever fired for buying “Big Blue,” in reference to the color of IBM mainframe computers). And the TP part of the product was actually pretty unique and very powerful for the time.

IMS used a hierarchical data scheme. For example, a purchase order had a header which carried the stuff that was common to the children. A child might be the information about a purchased item. There could be several children. A child in turn might have its own children, for example delivery schedules for each item, and so on. I don’t recall if there was a limit to the depth of the hierarchy (I was not a DBA), but I do remember that 3 or 4 levels was about as deep as one ever wanted to go for the sake of processing efficiency. IMS DB was an overlay on top of VSAM (virtual storage access method) files, which was essentially the progression from sequential tape files to random-access disk drives. VSAM pretty much preserved the notion of sequential records in a file.
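The header/children/grandchildren shape described above maps naturally onto nested documents, which is essentially what a document database like MongoDB stores today. A sketch of that purchase-order hierarchy (all field names invented for illustration):

```python
# Invented field names; this mirrors the header -> children -> grandchildren
# hierarchy described above, three levels deep.
purchase_order = {
    "po_number": "PO-1001",           # header: stuff common to all children
    "supplier": "Acme Corp",
    "items": [                        # children: one per purchased item
        {
            "part": "widget",
            "qty": 100,
            "deliveries": [           # grandchildren: delivery schedules
                {"date": "1970-03-01", "qty": 60},
                {"date": "1970-04-01", "qty": 40},
            ],
        },
        {"part": "gadget", "qty": 25,
         "deliveries": [{"date": "1970-03-15", "qty": 25}]},
    ],
}

# Walk the hierarchy: total quantity scheduled across every delivery of every item.
total = sum(d["qty"] for item in purchase_order["items"] for d in item["deliveries"])
print(total)  # 125
```

Which is part of why the IMS comparison is apt: in a sense the document model is the old hierarchical model reborn, minus the fixed schema.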

Meanwhile, in one of IBM’s research labs, an English mathematician named E. F. Codd advanced the relational theory of data management in 1969. This was received with ambivalence by IBM executives, first because it was based on pretty abstract math, but more importantly because they feared that such a product would cannibalize IMS sales, something they were loath to do. But Codd’s work was published, and he eventually got frustrated, left IBM, and joined up with Chris Date and another fellow whose name I don’t recall. They started their own company, but it was not very successful. However, Larry Ellison picked up on the work and created Relational Software, Inc., which came out with a product named Oracle in 1979 that used the SQL language (IBM’s original name for the language, SEQUEL, was already trademarked). For all his faults and arrogance, Ellison was a great salesman. It wasn’t until 1983 that IBM came out with its own relational database, cleverly named DB2, as Oracle became an apparent threat to IMS sales (at the time Oracle ran on UNIX and IMS ran on MVS, but the handwriting was on the wall; Ellison had announced his intention of releasing an MVS version of Oracle).

The underlying principle of the relational model solved a problem not many people cared about: data redundancy, which led to integrity problems. Starting with paper forms all the way through IMS, the “same” information got recorded redundantly. Every purchase order, whether on paper, in a sequential record, or in an IMS header segment, named the supplier (and a bunch of other stuff) redundantly. If you had 50 POs with a vendor, the vendor’s name appeared as new information on all 50. And because it got re-entered on every record, variations were bound to arise. The relational model, through a process called normalization, provided a system of data management that stored the commonly reused stuff together as “attributes” (data fields) in a single “entity” (record). A 3rd normal form model provided a design such that this stuff would be entered and updated one time in one place and then “re-used” everywhere it was needed. The whole model is linked together by a system of keys.
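A minimal sketch of that normalization idea using SQLite (table and column names invented): the vendor’s name is entered exactly once in one place, and every PO links back to it by key, rather than the name being retyped on all 50 orders.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()

# Normalized design: the vendor is a single entity, entered exactly once.
cur.execute("CREATE TABLE vendor (vendor_id INTEGER PRIMARY KEY, name TEXT)")
cur.execute("""CREATE TABLE purchase_order (
    po_id INTEGER PRIMARY KEY,
    vendor_id INTEGER REFERENCES vendor(vendor_id),
    amount REAL)""")

cur.execute("INSERT INTO vendor VALUES (1, 'Acme Corp')")  # entered one time
for po_id in range(1, 51):  # 50 POs, each linked to the vendor by key only
    cur.execute("INSERT INTO purchase_order VALUES (?, 1, 100.0)", (po_id,))

# The name is "re-used" everywhere via the join, so it can never vary:
cur.execute("""SELECT v.name, COUNT(*) FROM purchase_order po
               JOIN vendor v ON v.vendor_id = po.vendor_id
               GROUP BY v.name""")
print(cur.fetchone())  # ('Acme Corp', 50)
```

If the vendor renames itself, one UPDATE on one row fixes all 50 POs at once, which is precisely the integrity property the redundant designs lacked.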

So while a 3rd normal form database helped to eliminate data integrity problems, precious few DBAs actually implemented 3rd normal form databases. In fact, most DBAs of the day didn’t know what a 3rd normal form data model was, much less how to construct one using SQL. Databases continued to be designed pretty much as sequential records or hierarchical records, with completely redundant data copied into highly denormalized output tables to support reporting functions. Most folks in the user community were highly tolerant of data variations; they had been living with them ever since record keeping began, so why would it be a problem now? When it made a difference, for example components in a bill of materials, there were people who double-checked them on input and again on output. How do you like your new job, junior engineer fresh out of college? Not everything you might have imagined, is it?

But in the 1970s a guy named Bill Inmon introduced the concept of the data warehouse. He posited that there was a lot of business intelligence locked up in the data. If it were all pooled together, a business could ferret out all kinds of important information. Which products were selling best? Where were they selling? When were they selling? To what demographic were they selling? Which products had a lot of warranty claims? What was failing? On and on, all manner of important business questions might be answered. And the relational structure was the most amenable storage method for addressing these questions, but lo, data quality suddenly was important. While humans are extremely tolerant of variation and anomalies, computers are not.

A company named Teradata (later acquired by NCR) was founded in 1979. Teradata was dedicated to business intelligence and data warehousing. In 1984 they came out with a specialised data warehousing computer that hosted their own relational database, optimised for 3rd normal form database design. Teradata was not designed to run transactional applications; the data was loaded into the warehouse via specialised ETL (extract, transform, load) tools from the transactional systems. Once loaded, BI (business intelligence) tools were used to perform analysis and reporting. Numerous companies offered different flavors of ETL and BI tools. Some of the favorites discussed here fall largely into these categories: Talend is, among other things, an ETL tool; Alteryx is essentially a BI tool.

So how is all of this relevant to MDB? What you might have noticed in this discussion is a couple of things. First, almost all the data management strategies mentioned were focused on handling forms-like transactional business data. There was always the problem of messy data that didn’t neatly fit in a predefined field. The old sequential records would often provide a fixed-length notes field where you could enter plain text. IMS had a more sophisticated unkeyed child segment for indefinite-length text. Relational databases eventually provided for CLOB and BLOB data (character/binary large objects). But to be honest, the stuff was a headache. You could store it, but you couldn’t manage or manipulate it.
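To make that contrast concrete, here is a sketch (invented data) of the kind of messy, non-forms-like records that frustrated fixed schemas. These two records live in the same collection but share almost no fields, so a relational table would need either many NULL columns or an opaque BLOB, while a document store can query inside either shape:

```python
# Invented records: same collection, wildly different shapes.
docs = [
    {"type": "photo", "filename": "site.jpg",
     "exif": {"iso": 200, "camera": "X100"}},
    {"type": "note", "text": "Customer called about late delivery.",
     "tags": ["urgent"]},
]

# A document store can filter on a field that only some records have;
# missing fields are simply absent, not NULL placeholders.
urgent = [d for d in docs if "urgent" in d.get("tags", [])]
print(len(urgent))  # 1
```

That queryability is the difference between “you could store it” and being able to manage and manipulate it, which is the opening MongoDB stepped into.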

The other thing you might have noticed was the staying power of Oracle. They had tons of competition. I only mentioned DB2, but the list of vendors and relational database products was long. Few of them had much market success. Microsoft’s SQL Server evolved from a partnership with Sybase (and Ashton-Tate). The history is somewhat convoluted, and it’s not correct to say that SQL Server was simply a renamed Sybase product, but the fact is that SQL Server, DB2 and MySQL are probably the only significant competitors for Oracle. I don’t have a source, but I’m pretty confident that Oracle remains the number one DBMS with respect to deployments.

Relational DBMSs will be around as long as transactional, forms-like data is around, and there’s a good bet that will be many, many years to come. But the non-relational data types (IMO the term “unstructured” is inappropriate; an MP3 or JPG file is highly structured, it’s just not amenable to being managed via SQL), which have been around for a long time, have gained in significance. And most importantly, they have gained in analytical importance. They are no longer opaque files on a server.

The lesson to be learned from Oracle is that the company that is the leader is highly likely to remain the leader so long as the management is smart enough to continue to address the demands of the market. Larry Ellison drove Oracle like a race horse. The DBAs where I worked had a love/hate relationship with Oracle. Standard practice was to never (or very cautiously) implement the new features in the latest release as they were prone to be very buggy. But, most of the bugs from the previous release would be addressed. So long as we remained one release behind with respect to implementation things usually worked. I say this with confidence as I was the manager of the group that set application development and database standards for the IT organization.

I of course was far too uninformed about investing to take advantage of my own knowledge with Oracle, Microsoft and other opportunities that were all around me every day. I think I’m smarter now with respect to investing.

I’m long MongoDB. I think the MDB runway stretches far into the future. I think Wall Street analysts, and Mr. Eberle in particular, have no imagination and no appreciation for where this company is potentially headed. The one reservation is whether the management that leads the company can stay out in front. When they assert that they really have no competition, it makes me uncomfortable.