PVTL - The Tangled Web We Weave

Being a “trust but verify” type of investor, I have been kicking the tires a bit, trying to get some clarity on the Kubernetes threat to PVTL and on how SW went from “the war has been won” to a bit more cautious about technology trends and their impact on PVTL.

What seems clear to me is that none of this is clear to practically anyone in the tech field, based on what I reference later in this post.

The background environment that we are discussing regarding containers/databases includes a complex array of standards setting (Kubernetes), strange bedfellows (GOOG and VMware), parent company control (DELL over VMware and PVTL) and dynamic, fluid partnerships that aim to give certain companies strategic advantages over others in more global battles (AWS vs GOOG).

It is a tangled web indeed!

Keep in mind that AWS is the king of the cloud…dwarfing GOOG, Azure and IBM. But it lacks software prowess of the kind that MSFT and GOOG have, so there are battles between them on levels far greater than just containers. Though AWS has accepted Kubernetes as a standard, many believe that AWS will/must beef up its software offerings or it will potentially see future erosion from GOOG and MSFT.

First, let me state that I thought it was just me that didn’t get these PKS, Kubernetes, etc. tech trends…until I read this really great article, which I would highly recommend you read straight through for some basic knowledge of where this industry started and where it stands. Let me reference specifically the questions it asks, since I have been trying to understand them myself, and they may give you some comfort that many experts are also scratching their heads:


Why does Google cloud platform have to run Pivotal Container Service, which requires people to build and operate it, and only AWS and Azure get this new VMware Kubernetes Engine that is operated by VMware? Moreover, why isn’t VMware Kubernetes Engine based on Pivotal Container Service, just a variant that VMware itself supports? How come Google Cloud Platform is not getting VMware Kubernetes Engine, and moreover, how come VMware Kubernetes Engine is not being offered on-premises on clusters running the VMware stack inside enterprise datacenters? And most importantly, why hasn’t VMware created a single substrate that can span public clouds and private clouds and provide a single, consistent, easy way of using Kubernetes, which can be run by enterprises if they choose or by VMware if they choose?

You see…this is a very tangled web…and it is far less clear than it first seemed.

If one considers who is winning this PaaS and CaaS game, it is Kubernetes without question. Check out page 16 of this 2017 article to see how they stack up…keep in mind that Cloud Foundry seems to be losing ground:


Page 18 shows how Red Hat performs vs VMware…this was the genesis of the GOOG, VMW, PVTL partnership for their Kubernetes container solution, PKS…going after Red Hat:


Meanwhile, PKS has its sights set on Docker, which has its own technology to manage containers. With VMware and Google behind it, Pivotal believes that PKS can grab market share from Docker, Watters said. Finally, VMware is aiming at Red Hat, its longtime foe. The $17 billion enterprise-software company offers OpenShift, an application-development platform that’s similar in some ways to PKS. With the help of PKS and Google, VMware is hoping it can stem some of OpenShift’s growth.
“The three of us are coming for Red Hat,” Poonen said.

But PKS is predominantly an on-prem solution, and many are questioning its success in the market. What happens to PKS over time as more and more companies move off-prem to the cloud?


VMware and Pivotal this week released an update to their Pivotal Container Service (PKS), but adoption of the enterprise-focused platform remains up for debate. **Neither company broke out specific sales numbers for PKS.** Cowen and Company, which recently released results of a public cloud survey of more than 570 IT and cloud services buyers, noted lukewarm PKS interest so far.
“Our checks have not revealed any particular interest so far and it remains to be seen whether this will meaningfully affect [VMware’s] container positioning in the long run,” said Gregg Moskowitz, managing director and senior research analyst at Cowen & Company.

A recent customer survey conducted by cloud security platform provider Sysdig found that 82 percent of Kubernetes deployments were of the upstream open source version. By comparison, the study found that 14 percent of deployments were using a managed version like Red Hat’s OpenShift or Rancher Labs’ managed version.

And as to PVTL’s particular strength:

Dillingham wrote in an email. “The orchestration platform customers choose to consume that value on is secondary, but since Pivotal is the clear leader of the Cloud Foundry project but not Kubernetes, its preference for maximizing its differentiation will be to focus customers on its PAS offering over PKS.”

The Pivotal Application Service (PAS), you may recall, is here:


It is PAS, the author contends, that will remain PVTL’s key differentiator…not containers. The key question then is: is PAS (PVTL’s real strength) enough to keep its momentum and grow revenue into the next decade???

When one goes back to the earnings call, it is clear that PAS is NOT enough for all customers:

Rob Mee
Sure. We’re seeing a lot of adoption from existing customers. A lot of our existing customers are investing in PKS. They have workloads that they want to run – they aren’t necessarily a great fit for our PAS offering. And so, they’re really glad that Pivotal is bringing a Kubernetes offering to market and it runs on the same platform as PAS. And so, there, we have a lot of customers that are jumping in and getting their feet wet with that right now. What we think is a real advantage of it is that it enables a very small team of operators to deploy and update dozens or hundreds of Kubernetes clusters with relative ease and that’s something that’s differentiated. The VMware connection there is very helpful because we are activating their large sales force to help us go to market there; we’re also integrating with their networking capability, NSX T and that’s something that solves one of the biggest challenges of using Kubernetes in a private cloud setting.

We seem to be seeing a true cross selling arrangement with that GOOG/VMW/PVTL partnership for PKS…but what if, after the initial cross selling spike is over, adoption isn’t as great as they have touted? They do NOT break down revenue from PKS. We also know that VMW went rogue with its own Kubernetes product without PVTL…so this is not necessarily a live-or-die-together partnership.

In case your head is not swirling by now, we also have arguments like this one, that Cloud Foundry and Kubernetes will be run together, with different use cases for each:


Kubernetes is, by design and definition, container-centric in its approach. It is described by one of its founders as “fundamentally a tool box.” Cloud Foundry, by contrast, container abilities like Diego notwithstanding, is application-centric. It is more production line than toolbox, in a sense. These approaches are neither mutually exclusive nor inclusive, and indeed can be, and often are, leveraged by different business units within the same organization depending on need.
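To make that “toolbox vs production line” distinction concrete, here is a rough sketch (the app name, image, and registry below are hypothetical) of the Kubernetes style, where the developer describes the infrastructure explicitly in manifests before the app ever runs:

```yaml
# Kubernetes "toolbox" style: the developer declares the
# infrastructure explicitly. All names/images are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3                  # scaling is declared by hand
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: registry.example.com/my-app:1.0
          ports:
            - containerPort: 8080
```

A Service, routing, and health probes would typically be layered on in further manifests. On Cloud Foundry, by contrast, the application-centric equivalent is essentially a single `cf push my-app`, with containerization, routing, and health management supplied by the platform.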

So what does this all mean??? It means we have a tangled web indeed…it is far from clear where this is going and how PVTL’s predominant PAS expertise thrives in this age of rapid technological acceleration. It would be helpful if PVTL would parse out its revenue by PAS and PKS…I doubt it will happen…there may be good reason for that. It would also be nice to hear what the cross selling revenue generation might be from the VMW relationship as compared to fresh organic growth. And it would be nice to know what the on-prem revenue was…especially since the trend in coming years is to the cloud.

Bottom line, if the whole discussion about PVTL has been confusing…it is for good reason! This is a very tangled web!

It speaks to the advice that Denny has given from time to time…that we cannot know what we cannot know…therefore, just follow the money. This company is far from a buy and hold IMO…many, many complex issues and skirmishes are simultaneously occurring as noted above, including containers/databases standards setting, strange bedfellows, parent company controls/cross selling and the many dynamic, fluid partnerships that aim to give certain companies strategic advantages over others in larger global battles.

With such seminal events occurring at such rapid speed, being too confident in a recent IPO like PVTL might seem a bit careless. Hence, follow the money, remain nimble, don’t assume any war is won…ever.

And if you made it this far reading this post…you deserve a medal!



I am not sure what all the fuss is with Pivotal. PKS runs on any public cloud like anything Pivotal does. It is cloud agnostic.

I have other questions I am still reviewing, but the primary article you address is mostly focused on VMware, and I cannot comment there. 500,000 customers is not a bad number though. But much of what was described does not fit with what PKS does, as it is cloud agnostic and not linked just to Google Cloud.


<<<BOSH advantages: Built-in health checks, scaling, auto-healing and rolling upgrades
Fully automated operations: Fully automated deploy, scale, patch, and upgrade experience
Multi-cloud: Consistent operational experience across multiple clouds
GCP APIs access: The Google Cloud Platform (GCP) Service Broker gives applications access to the Google Cloud APIs, and Google Container Engine (GKE) consistency enables the transfer of workloads from or to GCP>>>

What it does have is native connectors to Google APIs and operating software. But since Kubernetes derives from Google, that is to be expected. Nothing impedes running Kubernetes on Amazon or Microsoft or IBM.

Perhaps I am misreading, as the material seems to indicate that PKS is not cloud agnostic. It is, and so is Kubernetes. However, since Google created it, Google does add elements in Google cloud to make it easier to run on Google cloud than any other cloud.

So I do not understand this concern in reference to Pivotal.

What did Pivotal add to Kubernetes? Good question. Is PKS cloud agnostic, however? Yes.



Great post. The more I read and talk to people in the industry, the more I think that, like you said, nobody knows where the industry is going to go with this stuff. Both Kubernetes and Cloud Foundry are trying like mad to take over each other’s use cases. My takeaways, in no particular order.


Pros:

  1. Each will continue to have advantages over the other in specific use cases
  2. The TAM is large enough to have PVTL grow quickly even if they end up being #2
  3. PVTL is the only pure play investment
  4. business momentum (companies switching to a container style software deployment approach) is in our favor
  5. large companies are helping cross sell pvtl’s products


Cons:

  1. Amazon has gone with Kubernetes (this could be a pro, though, because you can bet the other cloud titans will want to offer an alternative)
  2. which software a company goes with is as much a political choice as it is a technology choice
  3. fast moving field in its relative infancy. Next year PVTL could be forgotten and clearly be a non-competitor
  4. The nature of new fields is to start out fragmented and then coalesce. The company with the most power is the one that gets the customer first. I don’t think that is PVTL.
  5. PVTL’s subscription growth is artificially elevated right now due to internal customers switching to subscriptions.
  6. How long will other companies cross sell pvtl’s products?

Duma, I agree strongly with your takeaway. Watch this one carefully and allocate appropriately. Exciting? Yes. A done deal? No.

p.s. I think we will see these pure plays like PVTL, PSTG, NTNX have their turf treaded upon by the NetApps, Amazons, and Red Hats of the world. This market will be very interesting to follow over the next 5 years. I predict acquisitions in our future.


You guys are seriously overthinking this. I dug into the tech as well, but what I was looking for was not the tech itself but as it relates to Pivotal.

There are two things we know no matter where technology goes, and they are:

(1) software and development is the new factory floor for enterprises (even manufacturing enterprises) and

(2) the best practice is to abstract away as much as possible of the labor of software development. The reason for this is the more that is abstracted (i.e. made invisible and automatic in the background) the more efficient software development becomes.

Kubernetes, or whatever else may follow (except machine learning and machines writing their own code; let’s assume that away, as that is a whole different thingy), will require the infrastructure surrounding each application to be planned, coded, debugged, and delivered before even writing the application.

The direct Kubernetes developer is required to do all of this before even writing the actual application. Pivotal requires the developer to do none of this other than write the application. It just works, as is the motto at Pivotal.

There is a trade-off, as the raw Kubernetes developer has more room to customize each application’s infrastructure. But this customization comes at a price: the developer needs to have more skills and experience, and loses efficiency in the time it takes to create and deploy new applications. Iterations of the applications may be easier once a developer has gone through the trouble of dealing with the infrastructure issues initially, but still, the infrastructure has to be maintained, repaired, and upgraded over time. So the labor of being a raw Kubernetes developer calls for more skill, takes more time, is more prone to developer error, and less prone to organization-level systematization. What one developer does in department A may be indecipherable to the person who replaces him from department B in the future. But if the solutions the organization is working on require such customization, then that is a necessary cost of doing business for that particular company.

In contrast, provided that sufficient customization options are available when needed, but not required if you choose not to use them, the Pivotal developer will be more efficient in producing and deploying applications, as well as in maintaining the infrastructure going forward, with the added benefit that code will be systematized across the enterprise. So when the developer from department B replaces the developer in department A, they will be talking the same language, and the initial code will be less likely to be in error because of customizations made by the developer.

This dynamic is not going to change. The real issue is do you choose to create applications without the higher level abstractions, lose efficiency, but keep more control, or do you create applications with the higher level abstractions, be more efficient, and it just works.

It does not matter what the underlying technology is; that is the real and only question customers are asking. To the extent the customer wants the latter, Pivotal is the best company in the world to turn to.

The history of the world favors the latter choice for those choosing to create the most efficient development organizations, specifically taking into account the shortage of talent; the former will be favored by start-ups and tech jocks who have the talent, who are visionary, and who do not want to be bothered by any constraints (I was like that with machine language and Assembler - GIVE ME HEXADECIMALS! I still have a difficult time working through the high-level, abstracted-away computer world sometimes).

So no matter the underlying technology (Kubernetes, Docker, Diego, whatever), there will be customers who do not want the technology abstracted from them at the developmental level (or want just enough of it), and there will be customers who want it to just work, with the technology abstracted from the developmental process as much as possible (while allowing some ability to customize as necessary). Customers who do not abstract will have lesser efficiency and will require higher-skilled developers (and this may be right for the specific problems they are working with), and those that abstract will have greater efficiency and can use less-skilled developers, while maintaining better standardized quality control and the same language across the enterprise.

All this talk about Kubernetes this, VMware that, etc., I think (although it is very interesting and I am of course going through it and reviewing such things myself) really misses the investment point, at least at present. Whether Pivotal will continue to grow its customer base, and whether its RRR will remain high, is the investment point. The underlying product argument is no more than the immediately previous paragraph that I put in bold.



as the raw Kubernetes developer has more room to customize each application’s infrastructure

These Kubernetes pros may be hard to find, and prone to leave for even higher paying jobs.

As software development explodes, really good programmers, ones who can do original work, not just patch together libraries, are getting ever harder to find and ever more expensive. Thus most companies, particularly smaller ones, will be pushed in the direction of not being so dependent on a small, expensive ($200,000 a year?) elite.


software and development is the new factory floor for enterprises

I assume you’ve never been on a real factory floor. I don’t think we’ll see the day when anywhere from 100 to 450 or more people climb aboard a virtual s/w airplane.

But, setting that aside, there’s another way of looking at this, which I think is really more relevant.

I worked at a very large Fortune 50 company with an IT shop of over 2,500 application developers and maintenance folks. During the last 20 of my 30-year IT career I was in a position to provide advice to senior management on decisions such as this. Not once during my tenure, first as the manager of development methods and tools and later as one of a very small group of enterprise architects, did any significant purchase decision ever reside with the tech staff. When it came down to large dollar outlays (PVTL products would qualify) that would impact a large percentage of the staff (adoption of PVTL products would again qualify), the decision was rarely based on the technology alone. Inevitably, ROI charts and hockey stick graphs, along with financial analysis of the viability of the vendor, would provide the substance upon which the final decision was made.

As Duma pointed out, this is a confusing arena; there are a lot of interacting, moving parts and they come from different vendors. There is also the complication of ownership, as Dell is the parent of more than one of the component parts, and it’s not at all clear whether they are working in concert or as competitors. Senior management (director level and above) is rarely well versed in high tech. Even if they were promoted through the ranks, they weren’t promoted due to their technical skills beyond second-level management. At least not in a big shop; maybe small shops are different, I’ve never worked for one. But then the primary target clientele for PVTL is big shops.

So, as much fun as it might be to dig around in the weeds of the technology, ultimately the decision will be made on the business case. Tinker appropriately bolded it: do you choose to create applications without the higher level abstractions, lose efficiency, but keep more control, or do you create applications with the higher level abstractions, be more efficient, and it just works. This is the substance upon which purchasing decisions are made. The final decision maker only needs evidence that the product does what it is supposed to do; they really don’t care and don’t even want to be bothered with how it gets done. The “how” is uninteresting to the non-techie, and explanations consume far too much time and give rise to too many detailed questions and issues that techies will argue about endlessly.

Major software products that become embedded in a company’s mainline business processes are incredibly durable. I mentioned in a different post that the company I worked for overhauled their entire mainline engineering/manufacturing systems, replacing a lot of homegrown applications with COTS. But after all was said and done, they were still dependent on some legacy IMS/COBOL and even a few serial batch applications (when I say “dependent” I mean they would not be able to stay in business without them). These applications were specialized in function and mission critical, and there were no COTS alternatives. They were still in production when I retired 8 years ago. I’d venture they’re still in production today, maybe with a GUI, but the same application under the covers.

In some ways, the company I worked at was atypical. They didn’t design and build coffee makers. In fact they built products that were enormously complex, involved the safety of human life and had the longest life cycle of most any products built anywhere by any company.

The law of software economics dictates that costs flow from that which is closely measured to that which is less well controlled. In practical terms that means that a lot of money is spent during development (less well controlled costs), but the bulk of the cost for any important application will be spent during the maintenance portion of the life of the application. This will frequently be 10 years or more.

Why do applications live so long when technology progresses so fast? Because ripping out an existing application (and every major business process today already has s/w support) and replacing it with a new application is extremely disruptive. There may well be hundreds or even thousands of employees using whatever they have today. A new application will inevitably require redesign of the business process, and hundreds to thousands of hours of lost productivity in training (as well as the cost of the training development and delivery). Degradation of morale, as people, being people, are naturally resistant to change. An explosion of costly error conditions until people come down the learning curve. Often managers of the business units get into turf battles as the new application imposes alteration of the business process; some jobs go away and new ones are created. This, of course, means budgets are impacted, and every manager losing people and budget is going to be resistant. And if there are unions involved, that will be another layer of negotiations that have to take place. I’ve seen this over and over again during my 30 years in IT, all of it involved with application development and maintenance.

Every senior IT manager understands this and further understands that purchasing PVTL products is not simply a technology insertion project isolated to the IT organization. The only reason an application gets developed and put in production is because it serves some business need, almost always outside the IT organization. The decision is not going to be made based on the underlying technology. It will come down to how does it serve the business in financial terms.

I have not dug into the specifics and details of the PVTL products. I spent 20 years doing that kind of work on a daily basis; when I retired, I quit doing it. Further, it’s not the most relevant thing when it comes down to actually selling the products to large shops. What will drive the final evaluation is whether or not senior management believes that the disruption and cost of the product will be a good investment over the lifecycle of the applications that employ it. And also, does senior management have confidence that the vendor (or some vendor) will be around to support the product over the next 7 - 10 years (more than a few times I’ve seen management purchase a technically inferior product because they had little confidence in the business viability of the vendor of the technically superior competitor).

Boil it all down, I think PVTL will often win the war based on the product offering. It does what it does sufficiently well to provide an acceptable ROI within a reasonable payback period. The biggest negative I see is the messy ownership relationships surrounding it and complementary products.


Brittlerock, I can reinforce your positions across the board from a somewhat different perspective. The one qualification that I might make is that “the times they are a changing” in some respects. The move to mobile and web interfaces and rich customer and supply chain applications has become almost precipitous compared to the rate at which such new technologies were rolled out in the past. I’m not at all sure how this is going to shake out, but it certainly creates a demand for nimbleness and puts a premium on technologies which can rapidly adapt to different environments. It seems only a few years ago when the norm was developing native applications for each mobile environment and now there is a high premium on a single development environment which will provide native-like results across multiple deployment platforms.


For those who followed this deep dive, you may be interested in how Tinker ultimately concluded his deep dive into PVTL.

To summarize: it eroded his conviction in PVTL’s dominance.


It reminded me of my own deep dive into Everbridge - and my decision to keep it on the watch list

Sometimes analysis leads us to discover that inaction is the best action


Remember, I hold a very concentrated portfolio. I do not like putting in all the work just to have my 2% or 5% or even 10% of portfolio double, thus giving me a 20% return on it. I therefore have more exacting requirements for what I hold.

Here is the post detailing what I learned in the deep dive: http://discussion.fool.com/let-us-put-it-another-way-hope-is-not…

Dreamerdad agrees with my conclusion in this regard. For most people it is no reason to sell at all. But for me, I need to get my port to where I do not end up with this little itch at the back of my head saying “I can do better”. I just want to be able to do nothing other than keep adding until it is time to do something.

Those who follow me know that I had to do something earlier this year (similar to what Saul did) to get out of ANET as an example. Nothing wrong with ANET, but it no longer was the best choice to hold from my criteria. I have discussed the reasons for this before.

As I said in a previous post, I’d rather put my Pivotal money into MDB, as an example, at this point. The last two days have unfortunately given me the opportunity to do so in a tax-palatable manner. In the end I was down only 2.3% today after making the trade out, including a tax loss that will be worth a lot of cash come April (I added that back in; the decrease was 3.5% without the tax gain), and buying what I wanted more of today. Fortunately the lows of the day held at the time I reassessed my portfolio.

I ended up selling one stock that I did not want to sell, and now cannot buy back until September, but taxes dictated that I do it. It was a large return just in taxes saved, and maybe I will be able to get it back. I bought ZS instead of Pivotal. Maybe ZS will not perform as well, but I can hold ZS without having to do anything and without some buzz in the back of my head telling me that I can do better.

Any event, the link above articulates my reasoning and I think is a good discussion point.

Again, unless you run a portfolio as condensed as mine, and unless you have to worry about tax issues (I mean, I may be paying more than 40% on any short-term capital gains), I still think Pivotal may be a great investment going forward. I would not recommend selling it. I just find ZS a superior investment for the reasons I described in the post linked above (and have discussed elsewhere). ZS hit my buy point and I had to jump on it. Cannot blame me for that… I personally find MDB and ZS to be no-brainer hold stocks that, at minimum, will easily grow into their valuations, for the reasons of CAP that I describe in the post linked to.

So hopefully it leads to some good discussion.



dumaflotchie: And if you made it this far reading this post…

Barely and skipping most of it. ;(

It speaks to the advice that Denny has given from time to time…that we cannot know what we cannot know…therefore, just follow the money.


ethan1234: Great post. The more I read and talk to people in the industry the more I think that like you said, that nobody knows where the industry is going to go with this stuff.

brittlerock: When it came down to large dollar outlays (PVTL products would qualify) that would impact a large percentage of the staff (adoption of PVTL products would again qualify) the decision was rarely based on the technology alone. Inevitably, ROI charts and hockey stick graphs along with financial analysis of viability of the vendor would provide the substance upon which the final decision was made.

I’ve been both a seller (IBM, NCR) and a buyer of IT. Back in the old days IT people relied heavily on the ignorance of management to build their IT empires. Then came outsourcing and IT people lost some of their powers. It should be clear to investors that the old saying “Build a better mousetrap and the world will make a path to your door” is not true for high tech or any other increasing returns business. This container, virtual server, hybrid cloud, whatyamacallit thing is totally confusing and trying to find an investment winner by looking at the technology is a no-go. Follow the money instead. Thanks Duma!

Pivotal’s customer base is large corporations where IT geeks don’t reign supreme. As Pivotal says, they don’t care how it works, just do it! A modern technology works so well it’s invisible; you don’t need to know how to generate and transmit electricity to turn on the lights. Just do it at a reasonable price!

Kubernetes, Docker, Pivotal Cloud Foundry are all open source. Which provider can best monetize their open source software? As an investor I think the winner is Pivotal. Look at their list of satisfied customers, that’s where the money is. In-house use of the open source software is secondary.

Denny Schlesinger


Tamhas, it appears that we are actually talking about different types of applications. Or, at least I think so. Anyway, correct me if I’m wrong, but the apps you mentioned at least sound like they are all customer facing, and to a large extent an end consumer is the customer.

I, on the other hand, was referring to apps supporting mainline business processes that are integral to the business process. Most certainly, any business that has people “in the field” will want to provide a mobile front end and may have reason to deploy in multiple environments, but the stuff that runs the internals of the business I assume is still pretty much deployed with a terminal-based front end, which may or may not be accessed via a web browser. I know for a fact that where I worked, we had certain apps which could only be accessed from certain specific terminals located in a room with locks on the door.


Brittlerock, actually I was talking about the transition. Once upon a time my company sold an ERP system for distributors and light manufacturing. Originally, the users were all inside the company and the focus was supporting core business applications. One of my better accomplishments was taking a company to 10X in sales with no addition of administrative staff and only about a 30% increase in warehouse staff, entirely through productivity gains … while providing the administration with greatly enhanced management information.

But, in the last few years I was selling this product, and since, there has been a major thrust outside of the company. We saw this at the same company way back when by adding a web ordering system … then a small minority of sales, because the vast bulk of sales were through local brick and mortar, but a crucial addition for people in remote areas. Plus, we were doing a lot of supply chain automation, which became the bulk of orders, invoices, and confirmations for major customers. (BTW, at the time, Amazon was really terrible at this.)

Since then, the growth of customer facing and supply chain applications has mushroomed. These are not core business in the sense you are talking about, but they are core because they have become essential for many businesses in how they interact with the outside world. Indeed, my consulting practice for some years has focused on people with legacy systems that won’t support this kind of interaction and how to modernize those systems so that they will both better support the internal applications, with functions such as rule-based systems and workflow, and enable the application to interact with mobile, web, and other remote systems.