VMWare + AWS > NTNX ? What about PVTL ?

Starrob posted a couple of articles over on the Premium AMZN boards that I found interesting with respect to the various recent discussions about NTNX and PVTL and how/where they compete with VMWare or AWS

The first:
https://www.zdnet.com/article/aws-announces-amazon-rds-on-vm…

announces that AWS’s RDS (database cluster as a service) is now available for VMWare private clouds. I haven’t found any technical details on how this is implemented yet, but from the article it sounds like they’ve taken the basic automation they wrap around MySQL and PostgreSQL database clusters and ported it to VMWare, giving IT departments and software development groups the same ability to stand up relational database clusters on-site as easily as they can in AWS.

If this is the case, then I expect that RDS is merely the first of many AWS “services” we will see ported to VMWare private cloud infrastructures. This is something organizations like OpenStack have been promising for a long time, but have failed to deliver convincingly. In my opinion, the real future of “clouds” is when something as easily used as AWS is available in private data centers!

The second article discusses the RDS service more, but then dives into VMWare and its views of hybrid clouds, edge technologies, and how IoT fits in as well.

https://www.forbes.com/sites/jasonbloomberg/2018/08/30/vmwar…

One of the concerns I have with companies like NTNX is that they are selling a proprietary private cloud solution. AWS is by far the largest and most successful public cloud, and they offer a more secure “GovCloud” implementation which lags the commercial cloud by a few years feature-wise. But as I mentioned above, I think the real future is in the private, on-premises cloud, when companies are able to re-organize their existing data centers by combining technologies like Nutanix and VMWare with software-defined networking as offered by ANET.

The big thing AWS currently offers is all their various “services”: things like API Gateways, Lambdas, RDS, DynamoDB, ElastiCache, Route53, S3, CloudFront, etc. All these “services” are merely APIs that infrastructure engineers and software developers can plug together in seemingly endless combinations, much like LEGO blocks. This amounts to “serverless computing”, where the underlying infrastructure of an application sits at a much higher layer than a “server” running an operating system. No longer do I need to worry about patches or updates or security vulnerabilities of specific packages, applications, or libraries. I can focus on the data flow of my applications and glue the components together as necessary.
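To make the “LEGO block” idea concrete, here is a minimal, hypothetical sketch of the Lambda side of that glue: a single Python function that API Gateway can invoke as an HTTP endpoint. The event shape follows AWS’s proxy-integration convention, but the route, deployment, and names are all invented for illustration.

```python
import json

def handler(event, context):
    """Toy Lambda handler: API Gateway delivers the HTTP request as the
    `event` dict, and the dict we return becomes the HTTP response."""
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Invoke locally with a fake API Gateway event to see the shape:
response = handler({"queryStringParameters": {"name": "Paul"}}, None)
print(response["body"])  # {"message": "hello, Paul"}
```

The point is what’s absent: no server, no OS, no patching. Just a function plus the service wiring around it.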

When that capability is available in my own data centers, I no longer need to rely on public clouds! And many applications are beginning to be developed in exactly this manner. Serverless is the future. Services are the future. Containers are merely a stop-gap mechanism on the way to serverless! Which ultimately makes Docker and Kubernetes something to avoid if at all possible.

Nutanix seems to offer a proprietary means of building a private cloud. And it seems to have a fairly reliable product. But it’s the age-old problem with vendor lock-in. I can’t take NTNX’s solutions and simply move the application design entirely to AWS, or Google Cloud, or Azure. I can use their solution (in theory) to deploy my applications across any of those and my data center. But I’m limited to the APIs Nutanix offers me, and if it has hooks for AWS’ API, I can launch an AWS service into AWS, while deploying my data center components to my data center. I can’t use the AWS API in Nutanix to deploy an RDS cluster to my data center. But now, apparently with VMWare, I can!

So, where does PVTL fit in? I don’t know a lot about them. But from reading the posts on this board, they seem to offer an abstraction layer to deploy applications into any public cloud, which is similar to what products like Terraform from Hashicorp offer. Pivotal’s offerings sound a little more advanced and polished, but essentially the same thing. If that’s the case, I would expect them to be able to very quickly develop an abstraction layer for VMWare as well, such that developers can create an entire application infrastructure model and deploy it to AWS, to a private VMWare on-prem cloud, or to a hybrid solution if called for.

With AWS and VMWare pairing up on AWS service deployments to VMWare private clouds, I see the biggest threat to Nutanix. It will become an even bigger threat if AWS is able to begin packaging up more of their services as VMWare API calls wrapped around AWS-developed automation, layered on top of privately owned, VMWare-managed hardware in private data centers.


Paul

17 Likes

Hey Paul,

Great thoughts, the hybrid space is a pretty interesting (and combative) one! Dheeraj from Nutanix references ‘Hybrid wars’ a couple of times from memory.

In my conference call summary above, I mentioned my theory (not really a theory, more an ‘it has to be this way’) that Nutanix builds out the AWS stack within Nutanix, and my concern about the amount of development required. They’re starting this with Xi Cloud, but also with components like Era for databases.

VMware similarly has to do that.

Dheeraj mentioned that they’re aiming to solve the problem of getting legacy apps to the hybrid cloud, rather than shifting public cloud to private cloud and vice versa, and that there’s a big chunk of legacy to displace. Which makes sense if you think about all the massive private datacenters running company-specific workloads in the world.

“I can’t use the AWS API in Nutanix to deploy an RDS cluster to my data center. But now, apparently with VMWare, I can!”

Yes, that’s how I read it, which is an interesting position for VMware, but I’m not convinced it will make much of a difference.

I think the sales spiel would go something like:
Nutanix/VMWare → “hey, we’ve got a cool new cloud OS, so you can manage your datacenters like they’re running in AWS, great UI etc. And use the public cloud from the same interface. One click!”.

As a CIO, that’s a big shift from what I’m dealing with currently in my private datacenter.

The added “you can use RDS tools in your datacenter” I don’t think would be that interesting. I’ve already got my private databases, backups, disaster recovery, etc. sorted out to run the current load.

And the Nutanix rep would say, “Why use RDS tools? You don’t at the moment, just hook your build scripts up to Nutanix’s tools”. So the only thing I’m actually missing is the RDS bits (auto backups, versioning etc), and at that point, the Nutanix rep would start talking about Nutanix Era.

Interestingly, this question was asked during the Q4 CC… my coverage looked like:


**Q:** How do you view the hybrid cloud strategy of public cloud providers? e.g. VMWare with AWS (RDS + VMware now), Azure Stack, GKE on-premise.

**A:** Switzerland of servers. Design will win. Innovation will win. [GD: lots of words, but not much meat]

But yeah, Nutanix have a lot of development to do to make the vision a reality.

cheers
Greg

4 Likes

http://discussion.fool.com/why-vmware-needs-pivotal-33996459.asp…

Here is a thread on the issue of what about Pivotal that we had last week on NPI board.

Tinker

1 Like

Interesting, thanks for sharing your thoughts. I have a few questions about one of your statements:

Containers are merely a stop-gap mechanism on the way to serverless! Which ultimately makes Docker and Kubernetes something to avoid if at all possible.

How quickly will this happen? How does one operate without them until “serverless” arrives in an economical, easy-to-deploy-and-manage package?

For those that have gone this route with a cloud foundry, how long will they hang on to what they have?

Who is the investable winner in “serverless”?

Hi Hydemarsh,

Your last question is possibly the most difficult to answer for me. Partly because I have no idea how good the serverless offerings are from the other cloud providers. And I’ve only seen the tip of the iceberg from AWS.

Containers are merely a stop-gap mechanism on the way to serverless! Which ultimately makes Docker and Kubernetes something to avoid if at all possible.

How quickly will this happen? How does one operate without them until “serverless” arrives in an economical, easy-to-deploy-and-manage package?

Again, I honestly don’t know. I think it will be much like the shift from mainframes to datacenters filled with individual, single-task systems. In other words, decades. And even then, just as we still have a lot of mainframes hanging around today, we will still have static data centers filled with “traditional” application servers.

The issue at hand here is a complete paradigm shift in how applications are developed. Today, most applications are these extremely large, monolithic, tangled masses of code that require significant resources. There aren’t too many developers out there who understand micro-services, or how to develop and debug them. But with providers like AWS offering services like API gateways, CDNs, Lambdas, and various databases that require very little configuration and are trivial to deploy, I expect serverless to catch on quickly.

Also, I didn’t mean to imply that companies should avoid containers, simply that long-term, I think their necessity is limited. At least for the end-user. I can certainly see a longer-term use for them by cloud providers directly. In other words, containers and container management systems could well be instrumental in delivering serverless frameworks. I as the developer or infrastructure architect may have no idea that my serverless compute engine is really a series of Docker containers spun up by a Kubernetes cluster, but that may well be how AWS chooses to build things down the road.

But for the time being, containers and Kubernetes will be an essential stepping stone to serverless until developers figure out how to write serverless applications.

As for “easy to deploy and manage” packages, it’s pretty simple now. Let’s say you have a standard 3-tier web app with a front-end UI, a mid-layer “application”, and a back-end database. Take all the static web content and stuff it in an S3 bucket, pointed at by a CloudFront configuration for distribution. Create a small configuration for an API gateway with the API mappings pointing at the relevant static content files. Since the majority of the “static” content is really JavaScript, these files, as well as the API gateway mappings, in turn point to other API gateways and/or Lambdas, which are nothing but very small chunks of executable code. The Lambdas in turn can point to other Lambdas or API gateways, as well as to the back-end databases.
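As a toy illustration of the wiring just described (not real AWS APIs; every name here is invented), the whole thing can be modeled as a routing table where each path points either at static content in S3/CloudFront or at a Lambda that may hit a database:

```python
# Invented names throughout; this models the wiring, not real AWS calls.
ROUTES = {
    "/app.js":     {"type": "s3",     "target": "my-bucket/app.js"},
    "/api/orders": {"type": "lambda", "target": "list_orders"},
}

# Lambdas: tiny chunks of code that may call databases or other routes.
HANDLERS = {
    "list_orders": lambda: {"backend": "orders-db", "rows": 3},
}

def dispatch(path):
    """Resolve a request the way the API gateway mapping would."""
    route = ROUTES[path]
    if route["type"] == "s3":
        # Static content served straight from S3 via CloudFront.
        return {"served_from": route["target"]}
    # Dynamic content produced by a Lambda.
    return HANDLERS[route["target"]]()

print(dispatch("/api/orders"))  # {'backend': 'orders-db', 'rows': 3}
```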

All of this is packaged up in the same repository you store your code in. So, in the Git repo (or whatever version control system you use) right next to your src/ directory, you have a deploy/ directory that contains all the code to deploy that chunk of application code, whether it be an API gateway, Lambda, EC2 instance, Docker container, etc. Since all your code is located in the same place it becomes trivial to have your continuous integration/continuous deployment (CI/CD) system constantly checking the code out, deploying the infrastructure and application, and then testing it or releasing it for the end-user to experience.
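A minimal sketch of that idea, assuming the `deploy/` layout described above: the pipeline discovers the deploy steps that live beside the source and would run them in order. Here the steps are only reported, not executed, and all file names are invented.

```python
import pathlib
import tempfile

def discover_deploy_steps(repo_root):
    """CI/CD sketch: find every deploy script sitting next to the source
    it deploys. A real pipeline would execute each one (Terraform,
    CloudFormation, etc.), run the tests, then tear everything down;
    here we only report what would run, in order."""
    deploy_dir = pathlib.Path(repo_root) / "deploy"
    return sorted(p.name for p in deploy_dir.glob("*.py"))

# Simulate a repo with src/ and deploy/ side by side:
with tempfile.TemporaryDirectory() as repo:
    root = pathlib.Path(repo)
    (root / "src").mkdir()
    (root / "deploy").mkdir()
    (root / "deploy" / "10_api_gateway.py").write_text("# deploy API gateway\n")
    (root / "deploy" / "20_lambda.py").write_text("# deploy Lambda\n")
    print(discover_deploy_steps(repo))  # ['10_api_gateway.py', '20_lambda.py']
```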

Products like GitLab and Terraform make this fairly trivial today. I don’t know anything about Pivotal’s product, but from the sounds of it, they too make this fairly trivial. And from what I recall about Nutanix’s product from a demo they gave my group a couple years ago, they too allow you to model everything pretty simply.

The real question is, what does the standard modeling language become, and how can we invest in that? I think it’s entirely too early to tell yet. This is like the advent of the railroad. Everyone has their own gauge of track, and shifting between them is painful. At this point in time I can use Terraform to model my application for AWS pretty trivially. I can also use Terraform to model my application for OpenStack, Azure, Google, and a bunch of others. But for each provider I want to run my applications on, I have to entirely re-write my cloud infrastructure code, because no two cloud providers offer compatible APIs.
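The “incompatible gauges” problem can be sketched in a few lines: the same logical resource needs a separate code path per provider. The function and field names below are invented for illustration; real code would call each provider’s own SDK.

```python
def create_object_store(provider, name):
    """Invented example: one logical resource, one code path per cloud.
    Real code would call each provider's own SDK at each branch."""
    if provider == "aws":
        return {"kind": "s3_bucket", "name": name}
    if provider == "azure":
        return {"kind": "blob_container", "name": name}
    if provider == "gcp":
        return {"kind": "gcs_bucket", "name": name}
    raise ValueError(f"no backend written for {provider!r}")

print(create_object_store("aws", "assets"))  # {'kind': 's3_bucket', 'name': 'assets'}
```

Every new provider means another branch, written and tested from scratch, which is exactly the re-write cost described above.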

That is why this move by AWS and VMWare is so intriguing to me. It seems like the first major attempt to extend AWS’s APIs into a non-Amazon location. If I can transform my datacenter into an AWS API-compatible environment, and get my users used to thinking in this manner, it then becomes trivial to move them back and forth between AWS and the data center. I can have them start using AWS for development and QA, while reserving the data centers for production only. As a business, this moves all of my development and NRE costs to an OpEx model while reserving any major CapEx for paying customers.

For those that have gone this route with a cloud foundry, how long will they hang on to what they have?

I have no idea. Serverless and cloud-native applications are incredibly malleable and flexible. One project I worked with which was almost entirely serverless except for the very last link which required heavy duty compute cycles re-architected their infrastructure and application more than a dozen times in two years. A lot of that was in response to new offerings by AWS. But there was never a “major re-architecting”. It was more piece-meal. For example, as things like SSM Parameter store came out, they’d scrap a bunch of code that did something similar but worse, and re-write to use that. That sort of thing happened constantly. Is the constant re-architecting/re-writing likely to slow down because the AWS offerings aren’t being released as frequently? Or is it likely to speed up as they figure out more ways to leverage existing offerings? It’s impossible to know, and highly dependent upon each application and development team.

Who is the investable winner in “serverless”?

My money is on Amazon over the long run. They have the first-mover advantage and are only increasing the number of offerings available to enable their customers to move in that direction. But ultimately, I don’t think of the cloud providers like AWS, Google, etc. as the huge winners. They’re going to be like the networking companies of the late '90s/early '00s: they’re building the infrastructure everyone else will take advantage of. The biggest winners will be those companies which develop for the cloud, serverless or not, and which can move and react to their customers faster because serverless exists. Just like Netflix and Facebook and now SHOP, AYX, OKTA, NRLC, and many others are on fire because of the cloud, so too will companies that don’t yet exist come to the forefront to deliver things we don’t even know we need yet because they aren’t possible without serverless.

Ultimately, the big winner is the consumer. And the investable winner is whichever companies deliver what the consumer needs or wants.

But if Amazon can become the standard API, and VMWare can leverage that into the datacenter, those two will end up being huge winners. However, if Amazon can standardize their API for VMWare, they can do it for Nutanix as well :)


Paul

11 Likes

Hi Greg,

Dheeraj mentioned that they’re aiming to solve the problem of getting legacy apps to the hybrid cloud, rather than shifting public cloud to private cloud and vice versa, and that there’s a big chunk of legacy to displace. Which makes sense if you think about all the massive private datacenters running company-specific workloads in the world.

Absolutely! I think private datacenters are going to be like mainframes. They’re going to be around for decades because there are so many of them, and so many legacy applications that can’t be easily re-written for the cloud quickly or securely. Cloud development takes an entirely different mindset than what was used to develop those applications. Therefore it will be a complete re-write, and companies NEVER want to do that.

The added “you can use RDS tools in your datacenter” I don’t think would be that interesting. I’ve already got sorted out my private databases, backups, disaster recovery, etc to run the current load.

From a production perspective this is true. But how much of that is entirely automated? None of it. There was a team of system engineering folks working with the DBAs and networking and storage folks to get all the widgets lined up and wired correctly, and then it was released to production for the application developers to “use”. It’s a static thing. Sure, you have back-ups, and a DR plan. But how often are either of those actually ever tested? In most companies, never. They have them. They work in theory. They may have tested them once or twice, but in reality, the business is never willing to test these things, because they require downtime, and having a parallel hardware setup specifically for testing the DR process is entirely too expensive. Though I’m sure in the financial industry, and elsewhere, there may be a few companies who are actually willing to test this stuff frequently. But they’re the exception, not the rule.

Whereas, in a cloud environment, it’s all code. I can deploy an entire infrastructure, including databases, in minutes. And I can destroy it in minutes. And rebuild it. And I can do this all day long! I have an entirely encapsulated environment in which to test every single little change to my code, whether it’s in the core application or in the infrastructure itself. And I can build automation around this such that every time I commit a change, my CI/CD system notices, deploys everything, runs automated unit tests, and regression tests, and performance tests, and then tears everything down. And every developer on the team can have this isolated, encapsulated, private development environment. And, it’s IDENTICAL to production! In fact, I can even deploy it into the same location as my production environment for A/B testing on REAL data!
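That deploy/test/tear-down loop is easy to sketch as code. Here is a minimal, hypothetical version using a Python context manager, with the real infrastructure-as-code calls (Terraform apply/destroy, CloudFormation, etc.) stubbed out so it runs anywhere:

```python
from contextlib import contextmanager

@contextmanager
def ephemeral_environment(name, deploy, destroy):
    """Deploy a full environment, hand it to the caller, and guarantee
    tear-down afterwards, even if the tests inside blow up."""
    env = deploy(name)
    try:
        yield env
    finally:
        destroy(env)

# Stub out the real infrastructure calls so the pattern is runnable locally:
log = []

def fake_deploy(name):
    log.append(f"deploy {name}")
    return name

def fake_destroy(env):
    log.append(f"destroy {env}")

with ephemeral_environment("pr-123", fake_deploy, fake_destroy) as env:
    log.append(f"test {env}")  # unit, regression, and performance tests

print(log)  # ['deploy pr-123', 'test pr-123', 'destroy pr-123']
```

Each developer or CI job gets its own `name`, so every environment is isolated, disposable, and identical in shape to production.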

To me, that’s a HUGE win. It means my developers can move faster, eliminate bugs sooner, and identify issues with the code, design, security, infrastructure, etc. long before the customer is ever exposed to anything. And I can then do things like use a Chaos Monkey like tool to randomly knock pieces of my infrastructure offline to see what happens and figure out how to respond to that sort of outage. I can’t do that in today’s traditional datacenter.

But if I have a datacenter powered by VMWare, and Amazon enables their API for such a datacenter, well then, my private data center just became API compatible with AWS. And I can move all my development to AWS and lower my CapEx for development and QA datacenters by eliminating them entirely, and when the applications and infrastructure are ready, I can deploy it directly to my private datacenter in exactly the same way I deployed to AWS.

I think this will be a huge win for businesses in the long run. Static databases are the bane of existence for businesses. They’re as bad as mainframes. You can’t ever do anything with them, you can’t change them, you can’t test anything new without major disruption of the entire engineering team, and they’re expensive as hell! Why wouldn’t you want something fast, flexible, and cheap that each developer and QA person can play with all the time?

A: Switzerland of servers. Design will win. Innovation will win.

Absolutely! In the end, design and innovation always win. Which is why, if AWS + VMWare can design something that’s entirely seamless to move between cloud and on-prem, they will win. We will have private datacenters for decades to come. But they will have to become more cloud-like and more flexible. And the companies that can deliver that ability will be the huge winners. Proprietary and vendor lock-in solutions will get passed by. As a business owner, I want a standard cloud API that works across all cloud providers whether that cloud is AWS, in my data center, or on my development laptop (which, btw, is one of the things that makes Kubernetes so attractive!)

In the long run, I see the big winners as those companies who can leverage the cloud to their best advantage in order to deliver their wares to their customers faster and cheaper. Companies like Netflix are the big investable winners in this. Not the people building the roads.


Paul

9 Likes