Pivotal: A Discussion with Steppenwulf


I recently had an off-board discussion about Pivotal with Steppenwulf after he expressed some concerns in a board post. I thought I should post the discussion for all of you, as it seemed useful, and he concurred. This is all shortened and edited. Here’s the post that started it:

Steppenwulf sounds more doubtful
I’ve been meaning to post about Kubernetes, which is getting really big traction across the board. Good that Pivotal got there in time and is a leader in the space - but I have to watch how the Kubernetes developer tools work out. RedHat is doing well, and though RedHat is growing a bit slowly for this board, I may make a small investment to make sure I keep an eye on it.

MongoDB is making a huge bet on Kubernetes (naturally, since a MongoDB instance can run in a container), and it also has a strong working partnership with RedHat - they are getting growth out of that relationship.

I don’t think VKE is a threat to Pivotal for a few reasons:

  • VKE is a public-cloud-only offering, and I can’t think of a reasonable use case in which an enterprise would want a public-cloud-only Kubernetes environment - they could just use a cloud titan’s Kubernetes offering directly

  • The likely VMware intention (just my guess) is that a major sales pipeline will be VKE rolled up into PKS as an add-on offering, which simplifies multi-cloud deployment by PKS. It would make a lot of sense for VMware to ask for some payback for the sales help it provided to Pivotal in the past, and is still providing - it makes sense for customers who want multi-public-cloud, and it would probably be accretive for Pivotal.

Concerning Kubernetes, the picture is more confused. The base Pivotal product runs code. So the one thing you can’t put into a base Pivotal container is data. Data lives outside, in some other managed system, and you point to it via configuration from your Pivotal container. Of course, PKS manages Kubernetes containers, so it can manage data containers that way. But Kubernetes is a bolt-on for Pivotal - they will need to dance fast to make sure they stay at the cutting edge.
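That “point to it via configuration” step is, in practice, usually just an environment variable injected into the app container. A minimal sketch in Python (the `DATABASE_URL` name and the connection strings here are illustrative, not anything Pivotal-specific):

```python
import os

def get_database_url(env=os.environ):
    # The container holds only code; the database lives outside it,
    # and the app learns the database's location from configuration.
    return env.get("DATABASE_URL", "postgres://localhost:5432/dev")

# At deploy time the platform injects the real location:
url = get_database_url({"DATABASE_URL": "postgres://db.internal:5432/orders"})
```

Redeploying against a different database then means changing configuration, not code - which is exactly why the data can live outside the container.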

Pivotal still has the advantage in that it is by far the best tool for improving developer productivity for cloud-native applications, and that it is truly vendor independent. But if enterprises feel they need at least some Kubernetes in order to host data workloads in the cloud, then other Kubernetes tool sets and new startups with newer containers will have a chance. I’m going to be keeping my eyes open.

I asked him about it
Two months ago when you wrote up Pivotal you seemed to feel they had won the war. In this last post you seem to discuss them with much more ambivalence and doubt. Things must be evolving incredibly rapidly in the space. Do you still have faith/conviction in them?

He answered:
I still think they have won the war in code deployment on the cloud, but things can evolve quickly in tech. The fact that the base Pivotal product doesn’t support databases is a weakness I didn’t think much about. There are a few questions here:

  1. Will Kubernetes continue its growth? It is growing fast from a tiny base, so this could blow over as we get to the enterprise.

  2. Will Kubernetes become recognized as something that enterprise needs in order to host cloud-neutral database workloads?

  3. How will Pivotal do with PKS? Current Pivotal customers are using it - but will it be adopted by a big part of the Kubernetes crowd who aren’t currently Pivotal customers?

  4. How will Pivotal play the data loads on the cloud? Remember that Pivotal has huge depth in data, and their team were major developers behind Hadoop. (By the way, don’t believe the stories about the death of Hadoop - every major cloud data play involves a lot of Hadoop tech, HDFS is ubiquitous, and Spark is just another library that runs on HDFS. This is just a pause while the Hadoop tools get easier for people to learn and use.)

For a quick answer to your question - I haven’t changed my opinion of Pivotal at all for the next few years; it is 5-10 years down the road where I see increased risk. My core competency is as a solutions architect, and what solutions architects think about, every minute, is risk vs. reward. I’m always looking for risks and evaluating them against my thesis and the rewards. But if I actually thought the thesis had changed or broken, I would simply say SELL - and I don’t say that at all. I think Pivotal will be great for the next couple of years at least, but I want to watch Kubernetes closely.

I again asked about his ambivalence:
Thanks, Steppenwulf. It now sounds to me as if you are on the positive-but-keeping-my-eyes-open side of ambivalent, having come down from super-enthusiastic about Pivotal. Correct me if I’m wrong.

He responded:
I’m still super-enthusiastic over the mid term - say 2 years. My question has always been when and how do they get to the midmarket? They can’t shoot up forever at the same rate unless they have a strategy for that. They have to start connecting with the mid market in say the next couple of years.

I had a blind spot - I didn’t think about a solution for a database. Pivotal needs a good answer here. Logically this should not be too hard for them, but I don’t know how easy it would really be. Alternatively, they can win in the container market - no idea who is going to win in the enterprise container market yet, as it is tiny. Of course, the smart thing would be to do both.

So, while their trajectory in the large enterprise market is set and they have no competitors, how their product development goes around data will help us to know if they can stay on top.

I hope this discussion is of help,



Steppenwulf - How do you propose containerizing data???

I mean, the point of a container, as I understand it, is to be a quickly deployable, near-standard configuration that rarely changes over time (web server, app server, etc.).

We have been thinking about containerizing data, and it has always been as a separate entity.

For MSSQL, the binaries (and the config) would be a container, but the data itself would remain on filesystems.

Or are we talking past each other? 🙂

  • MK (SQL DBA)


I think Pivotal’s move away from services revenue (Pivotal Labs) will help greatly with their mid-market sales. This move is risky if they don’t execute it right, but if they do, I believe it will help them innovate faster and lead longer term (5-10 years).

They can partner with some of the best consulting firms in the world (who have relationships with small, mid, and large organizations) to “outsource” the services side - Pivotal Labs, their practice for helping organizations transform from waterfall (the old, slow method) to Agile (newer, faster, cheaper) product development.

Pivotal can then dedicate more resources (R&D, employee positions, scale, etc) into developing and updating PCF and stay on the leading edge with new software.

That is their high-margin, stickier software subscription side of the business which is their true differentiator.

I’m sure they will keep a few transformation consultants to work with their soon-to-be consulting partners, make sure Pivotal Labs is implemented properly, and build the relationships and education that lead to more PCF customers.

Bottom line, we need to monitor how this transformation goes. If it goes well, upside is very very substantial. If it goes poorly, they could be an average or less than average company.


I also think Pivotal has intentionally capped the pace of its own software growth to make sure they don’t get ahead of themselves, and to maintain their reputation.

The partner model has the potential to unlock even faster growth over the next 5-10 years.


I would like more specifics on this data container problem. Talend, as an example, is trying to disrupt data economics by migrating data into and through containers. At their Talend Connect conference they demonstrated cutting the cost of migrating data to 1/87th through the use of containers. The containers, I believe, were Docker containers, and Docker works with Kubernetes, but the containers can be of any type.

With Big Data, you don’t just put data into a container - it is a difficult process, and the data does not just stay in the container; it is dynamic and constantly delivered to where it needs to go.

I have a very hard time believing that Pivotal is not on top of containers of all kinds, every kind, where they were, where they are, where they are going, and why, and why not. I find that to be utterly unbelievable.

I also find it utterly unbelievable that Pivotal is not already working on such things and that it is not able to produce.

What I do find to be an issue, having looked into containers, is that as they become better and better, they enable more and more agility. Not necessarily the same agility that Pivotal provides, but they close the gap more and more. That is a potential longer-term issue for moving into the mid-market.

Docker is not standing still, and although they are working quite well with Kubernetes, Docker is not waving any white flag either.

As things stand now, it appears that when given the choice, legacy applications will be moved to Kubernetes and new applications will go through Pivotal. Pivotal, through PKS, can of course support the entire process and make it easier. So Pivotal is in on these conversations with their customers, and it is a no-brainer for a Pivotal customer.

However, when it comes to putting data into a container, that is a different deal. Here you have the data integrators. There are the legacy players, of which Informatica is the leader (it is also now the leader in iPaaS - i.e., what MuleSoft does), doing things in a proprietary manner that is not as cost-effective, but with huge installed customer bases, and innovating within that. And then there is Talend, much smaller, but winning with Big Data (like Hadoop) and cloud integration.

I do not know the connection between Pivotal and these integrators, other than that Talend has spoken about, and demonstrated, this disruption of data economics for a while now - using containers to game the transactional costs of moving large amounts of data into the cloud.

The data needs to get into the container, and the integrator does not create the container - the customer does. I do not see why managing such a container would be much different from managing an app, when it is the integrator batching or streaming and cleaning this data into the container.

But above my pay grade.

If SW is that concerned about it (and I am, in regard to containers closing the agility gap with Pivotal over time - not catching up, just closing the gap), I am not clear as to why, since the hard part is getting, managing, and cleaning the data to begin with, and that is handled by the data integrators. The container just makes for a disruptive way to game data costs.



More on Docker and containers:


The Only Kubernetes Solution for Multi-Linux, Multi-OS and Multi-Cloud Deployments

Docker Enterprise Edition (EE) 2.0 is the only enterprise-ready container platform that enables IT leaders to choose how to cost-effectively build and manage their entire application portfolio at their own pace, without fear of architecture and infrastructure lock-in.
Note the use of the word “only” here.

VM hypervisors, such as Hyper-V, KVM, and Xen, are all “based on emulating virtual hardware. That means they’re fat in terms of system requirements.”
Containers, however, use shared operating systems. This means they are much more efficient than hypervisors in system-resource terms. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This means you can “leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application.”


The key difference between containers and VMs is while the hypervisor abstracts an entire device, containers just abstract the operating system kernel.
This, in turn, means one thing VM hypervisors can do that containers can’t is to use different operating systems or kernels. So, for example, you can use Microsoft Azure to run both instances of Windows Server 2012 and SUSE Linux Enterprise Server, at the same time. With Docker, all containers must use the same operating system and kernel.

This is getting way too complex for me, I will just follow the money…


I saw this statement on twitter today from @msuster:

“From (my) vantage point of being able to see hundreds of companies, good and bad I have some advice for founders - get to know and love “gross margin”. Revenue doesn’t pay your bills, GM does”

I think this statement is really important: a company can grow revenue just to show growth; what’s important is gross profit growth.

Bringing this back to Pivotal, here is what we have:

Gross Profit (in $ millions):

2016: $94
2017: $182 (+94%)
2018: $281 (+54%)

May 2017 quarter: $62.2
May 2018 quarter: $96.4 (+55%)

So this last quarter PVTL grew gross profit by 55%, which was slightly better than all of last year. Also, the gross profit for the May quarter was better than all of 2016!

This is the metric I’m going to be watching on PVTL. I think it shows a better representation of how the business is doing versus revenue growth.
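The growth rates above are easy to reproduce; a quick sketch of the arithmetic (figures in $ millions, as quoted in the post):

```python
def yoy_growth(prev, curr):
    """Year-over-year growth, rounded to the nearest whole percent."""
    return round((curr - prev) / prev * 100)

# Gross profit figures from the post, in $ millions.
fy17 = yoy_growth(94, 182)      # FY2016 -> FY2017
fy18 = yoy_growth(182, 281)     # FY2017 -> FY2018
may_q = yoy_growth(62.2, 96.4)  # May 2017 qtr -> May 2018 qtr
```

This gives +94%, +54%, and +55%, matching the figures quoted.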



There comes a point in investing where one begins to overthink things. To me it comes down to a simple thing: containers (meaning Docker or Kubernetes, as I don’t know if there is any other real game in town) are made to use system resources more efficiently and to enable more agile development.

Pivotal was created to create agility and then let whatever resources exist in the target destinations do their thing, without the developer having to think about it.

Containers make you have to think about it (although they abstract much of this as well). If containers can get closer to Pivotal, that takes away a bit of what makes Pivotal so pivotal to agile software development. Absent that - and that is a long time coming, as Pivotal is always moving as well - containers will always need to be further abstracted away.

As things stand now, Pivotal is enabling its customers - who are still bragging about it - to become as agile and productive as any tiny start-up with coding jocks and super-geniuses. (For sleeping-pleasure viewing, I have started the 1-hour-10-minute presentation of Home Depot’s story with Pivotal. The presenter goes month by month through what happened during that first year, from 0 to 1,000 apps - except the presenter had to change the title to “from 0 to more than 3,000 apps.”)

In fact, arguably, Pivotal enables these companies to do even better, since the start-ups have no time to select the best-of-the-best solutions and instead lock themselves into one way of doing things - often by happenstance, often by trial and error, sometimes by strategic thinking - using methodologies that may not even scale as they become larger or more complex.

In this environment, the customers would already have chosen containers over Pivotal, if that is where they wanted to be.

As Suster said, follow the money. Pivotal is not going to allow itself to become antiquated by failing to enable better data processes either, particularly since data is at their core.



Just to (attempt to!) clarify containers/Kubernetes:

The whole point of containers is to run across multiple servers. Kubernetes doesn’t ‘do’ containers (as compared to Docker et al), it deploys/manages them across multiple servers.

Docker and Kubernetes are not comparable but very related. Docker Swarm is comparable to Kubernetes, but I think Docker have ceded that battle to Kubernetes.

Kubernetes abstracts all of your servers and virtual machines into a single big computer, so you don’t think about how many things are running and where - you just know it’s running on your ‘Kubernetes cluster’ (aka big computer).

Note that Kubernetes doesn’t deal with the infrastructure set up, so you still need a method of setting up the infrastructure (ie, the number of servers, networking, security etc that makes up your big computer).

Re: Data. Containers don’t deal with data. Containers are stateless, because they get created and destroyed at the drop of a hat. Big spike in traffic? Create some more containers on your spare server capacity! No one using the application? Shoot some containers! The point is, you don’t care how many containers are running, or on what.
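That create-and-destroy elasticity can be sketched as a toy scaling rule (the capacity numbers and function names here are made up for illustration; real autoscalers, like Kubernetes’ Horizontal Pod Autoscaler, work from measured metrics):

```python
import math

def desired_replicas(requests_per_sec, per_replica_capacity=100, minimum=2):
    # Containers are disposable: pick a count to match traffic,
    # without caring which individual container serves a request.
    return max(minimum, math.ceil(requests_per_sec / per_replica_capacity))

def reconcile(running, target):
    """How many containers to create (+) or destroy (-)."""
    return target - running
```

Traffic spikes to 950 req/s? `reconcile(2, desired_replicas(950))` says create 8 more containers. Traffic dies off, and the same loop shoots the extras.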

All container management systems abstract the data storage. Kubernetes lets you ‘mount volumes’ (geek-speak for ‘add a conceptual hard drive’) which you then hook up to a previously configured datastore. For example, you might have set up an Amazon Elastic Block Store (EBS) volume (which is AWS’s abstraction (more abstractions!) for a hard drive).
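A ‘mount volume’ hooked up to an EBS datastore looks roughly like the following Pod manifest, built here as a plain Python dict so the wiring is visible (the names and the volume ID are placeholders; the field names follow the Kubernetes core/v1 Pod API of that era, before CSI drivers became the norm):

```python
# Minimal Kubernetes Pod manifest: a container's mount point wired
# to a pre-provisioned AWS EBS volume. Names and IDs are placeholders.
pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "db"},
    "spec": {
        "containers": [{
            "name": "postgres",
            "image": "postgres:10",
            # The 'conceptual hard drive' as seen inside the container:
            "volumeMounts": [{"name": "data",
                              "mountPath": "/var/lib/postgresql/data"}],
        }],
        # What actually backs it - AWS's abstraction for a hard drive:
        "volumes": [{"name": "data",
                     "awsElasticBlockStore": {"volumeID": "vol-0abc123",
                                              "fsType": "ext4"}}],
    },
}
```

The container references the volume only by name; swapping the backing store means editing the `volumes` entry, not the application.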

With respect to databases, they have to talk to persistent storage (eg: store all your database files on an EBS volume). Or you just hook your containers up to a cloud database (eg: RDS etc) and let the cloud provider handle all that… abstraction!

The challenge with Kubernetes is the big learning curve. It’s definitely (expensive) geek world. It took us about 4 months of solid work to move from a cloud orchestration provider (Cloud66 - which I believe is similar to Cloud Foundry) to Kubernetes. If the people who know how to use Kubernetes leave, that would be a solid pain in the ass.

None of this is particularly relevant to Pivotal as an investment (follow the money!), but maybe helps clarify the relationship between Kubernetes and containers.




You describe exactly why software in the end will standardize on a few worldwide systems. Pivotal is the most complete such system - one that even simplifies the containers which were themselves made to be simpler than virtualization. That makes Pivotal a prime candidate to be one of these standard systems that every developer and IT department will know how to run, so no one is hostage to a few IT pioneers who built the system, failed to document it properly, and have now left the business in the good hands of successors who have no idea what the heck is going on in it.

In regard to data, which was the specific topic that started this thread, I talked about data integrators. I spoke of Talend trying to disrupt data economics using containers. What they are doing is using “serverless” computing in the cloud for data integration and cleaning.


What this does is enable a Talend user to also use Qubole to deploy, balance, manage, etc. - that is, to abstract away all the infrastructure issues so you can focus simply on setting up your data flows, get rid of on-premise equipment, and it just works. That seems to be a common disruptive theme, and for good reason.

This link is an example of what I was previously discussing. Pivotal does not need to specifically address this, as data and applications are not the same. What is necessary for Pivotal is not the integrating of the data - companies like Talend or Informatica will handle that - but the linking of the data, once cleaned and integrated, to the application. What Talend proposes is that the use of containers, with a resource like Qubole managing them, can cut the transactional costs of moving huge petabytes of data to 1/87th of existing costs, including the ability to run your data through whatever discounted, unused capacity the cloud titans have (apparently the cloud titans have fire-sale capacity that can be used if you know how to do it, or something like that), thus enabling far lower transaction costs for data.

Given this capacity out there, I am not sure what SW is talking about in regard to Pivotal’s shortcoming. A Talend does not handle applications, or what you do with data; it gets the data there and cleans it. From there, I am not sure why you cannot have your app direct where the data from the container - a data lake or wherever - is to be found and extracted.



<<<Docker’s solution

There is no getting away from it, data needs to be externalized and persisted outside the container and not maintained as a ‘container layer’. To address this problem, Docker offers directory mounts, named volumes and volume plugins.>>>


This article makes my point. Data, since it is not stateless but stateful, exists outside of the container, even though the container can be used to facilitate delivery of data, using spot bidding on data networks to increase efficiency.

Following from Docker, there are certainly things that can be done that do not seem out of the question for Pivotal to facilitate as part of the infrastructure. But the data strategy, it appears to me, cannot be completely abstracted away; it will need to be identified as part of the app, or as a necessary option in the delivery of the app to the cloud.
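For the concrete shape of the Docker options the quoted article names (directory mounts, named volumes, volume plugins), here is a sketch of the command lines, assembled as argv lists; the image, paths, and names are examples, and the plugin driver shown is just one of several that existed:

```python
# Directory (bind) mount: a host directory appears inside the container.
bind_mount = ["docker", "run", "-v",
              "/srv/pgdata:/var/lib/postgresql/data", "postgres:10"]

# Named volume: Docker manages the storage location itself.
named_volume = ["docker", "run", "-v",
                "pgdata:/var/lib/postgresql/data", "postgres:10"]

# Volume plugin: a driver-backed volume is created first,
# then mounted by name like any other named volume.
create_plugin_volume = ["docker", "volume", "create",
                        "--driver", "rexray/ebs", "pgdata"]
```

In all three cases the database files live outside the container’s writable layer, which is exactly the article’s point.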

But I will leave it to others. Obviously I do not mind trying to get into the technical aspects, but really the best I can do is follow the money and simply learn enough to try to understand things at a higher level.

From an investment perspective, in the context of this conversation, it does not appear to be a container issue per se, nor one that Pivotal cannot address - so with this in mind, follow the money.



Pivotal still has the advantage in that it is by far the best tool for improving developer productivity for cloud-native applications, and that it is truly vendor independent.

Why is it “by far” the best tool? Is it because PCF delivers app environments through the Open Service Broker API in such a way as to insulate developers from the cloud underneath? If that is the reason, then that’s not a very wide moat.



Pivotal was created to create agility and then let whatever resources exist in the target destinations do their thing, without the developer having to think about it.

Containers make you have to think about it (although it abstracts much of this as well).

Container technology has been around since long before Docker. Pivotal Cloud Foundry uses containers, just not Docker containers. The genius of Docker is that they made containers usable by regular developers, as opposed to Unix/Linux systems programmers. I believe PCF’s use of Open Service Brokers is their differentiator. Brokers give a developer a way to access cloud infrastructure in a cloud-agnostic way. Developers can spend less time learning the nuances of a specific cloud provider and more time developing applications.
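Concretely, on Cloud Foundry a broker-provisioned service shows up to the application as JSON in the `VCAP_SERVICES` environment variable, so the code below would look the same on any underlying cloud (the service label and credentials here are made-up examples):

```python
import json
import os

def service_credentials(label, env=os.environ):
    # Cloud Foundry injects broker-created bindings as JSON; the app
    # reads credentials without knowing which cloud is underneath.
    services = json.loads(env.get("VCAP_SERVICES", "{}"))
    bindings = services.get(label, [])
    return bindings[0]["credentials"] if bindings else {}

# What the platform might inject for a bound MySQL service:
sample_env = {"VCAP_SERVICES": json.dumps({
    "p-mysql": [{"name": "orders-db",
                 "credentials": {"uri": "mysql://app:pw@10.0.0.5:3306/orders"}}]
})}
creds = service_credentials("p-mysql", sample_env)
```

The app asks for credentials by service label, never by cloud provider - which is the insulation the broker model provides.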


data does not just stay in the container, it is dynamic and constantly delivered to where it needs to go.

This is partially true and partially false. A lot of data is static. It is not “delivered to where it needs to go” so much as it is accessed by applications that need to use it. There’s a big difference between delivery and accessibility.

Given that distinction, I don’t know how this plays against containerization. I’ve been away from the technical details of this stuff for more than 8 years now, and that’s a choice I made when I retired. I have neither the time, inclination, nor interest to delve into the minutiae of IT. Nevertheless, I will read the observations of others who do dig into this stuff, so far as I think it relevant to my investments. But, like Saul, I think a deep understanding of the product offerings is not critical to making good investment decisions. At the same time, I try to remain alert to the potential for disruption, as disruption in the rapidly changing IT landscape is omnipresent.