Nutanix is an AI and Machine Learning Company

I'm trying to better understand Nutanix for an article I'm writing for TMF. I wanted to share a post from the company's blog that helped me understand what benefit Nutanix truly provides.

Nutanix's Enterprise Cloud OS manages customers' private and public clouds to make them feel like one single cloud. But it goes further than that. Using AI and ML, Enterprise Cloud OS constantly scans for the best place to host business-critical applications and for degraded nodes; if a degraded node is found, critical services won't be hosted on it.

What does this look like in action?

This year on Black Friday, Macy's and Lowe's had IT glitches that significantly slowed purchasing activity, which probably cost them a ton of business. Neither was using Nutanix. If they had been, Nutanix's Enterprise Cloud OS would have detected the issue and hosted those critical applications elsewhere while simultaneously trying to fix the IT issue itself, again using AI and ML.

https://next.nutanix.com/blog-40/the-cloud-os-awakens-a-new-…

There’s always a chance we have Nutanix completely wrong, but after all of my research for this article so far, I’m expecting Nutanix to blow expectations out of the water.

As I've read more about Nutanix's transition away from zero-margin pass-through hardware toward being a 92% software-revenue company, I am more and more impressed by management and the way they have executed.

Nutanix now has the bandwidth to focus on its most important, highest-margin software products, which has set the company up for long-term success.

I’m holding on to Nutanix.

30 Likes

That's an interesting post. Thanks for sharing.

While I am convinced by the NTNX story and have a sizable holding, I still worry about a couple of things:

NTNX as a Cloud OS that transcends private and public clouds is itself a very strong value prop. However, to my understanding, all three major cloud providers have aggressive plans to address this with their own offerings… Google Cloud, MS Azure, and Amazon AWS. In many cases, they may end up discounting, or even offering for free, what amounts to the "cloud OS"… of course probably of inferior quality for now, but they would keep improving it.

On AI/ML, every company will talk about AI/ML in a way that is very convincing… at the end of the day, it will be one layer (although a critical one) among the stack of things in a company's offering (except maybe for NVIDIA and Google, for whom it is already a huge source of revenue).

Last but not least, I do worry about VMWare against NTNX. It is a direct competitor and the incumbent, and on this board no one really mentions how VMW is trying to combat / one-up NTNX.

I don't claim to know the depth of these things or to have good judgment; probably that's why I am still holding a sizable NTNX position (mid-size) but not going all in… Certainly the valuation is too low to stay out, but I will need to understand some of these things before either doubling down or bailing out!

4 Likes

While I am convinced by the NTNX story and have a sizable holding, I still worry about a couple of things

Regarding the three big cloud providers, Nutanix allows its customers to avoid lock-in with a specific cloud provider. So while the Big 3 may try to enter Nutanix's business, they won't offer cloud-agnostic systems. I find it unlikely, though not impossible, that the Big 3 would offer a product as robust as Nutanix's.

As for VMWare, they are certainly part of the conversation when talking about Nutanix. However, in that regard the two companies are basically a duopoly, with their HCI solutions growing at or above the fast-growing market rate (if you believe what VMWare reports as segment sales). Either way, both are growing quite well and I don't see this as a problem… it's not winner-take-all.

Just a few thoughts…

A.J.

3 Likes

CMFALieberman writes:
This year on Black Friday, Macy's and Lowe's had IT glitches that significantly slowed purchasing activity, which probably cost them a ton of business. Neither was using Nutanix. If they had been, Nutanix's Enterprise Cloud OS would have detected the issue and hosted those critical applications elsewhere while simultaneously trying to fix the IT issue itself, again using AI and ML.

That’s not what Nutanix claimed. Here’s what they actually said in your link:
Nutanix leverages a clustering ML algorithm along with a distributed set of degraded node monitors to identify the degraded node. Once the degraded node is flagged, an alert is generated and the leadership and critical services will not be hosted on that node.

Notice that the action is “an alert is generated,” not that it “fixes the IT issue itself using AI and ML.” I’m sure they want to get there, but they’re not quite there yet.
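The quoted passage actually describes the mechanism pretty well. As a toy illustration of what flagging a degraded node from cluster-wide metrics can look like (my own sketch with invented metric names and thresholds, not Nutanix's implementation), a robust-outlier test does the job in a few lines:

```python
from statistics import median

# Invented per-node health metric (e.g., average disk latency in ms);
# a real system would sample many metrics across the cluster.
node_latency = {"node-a": 4.1, "node-b": 3.8, "node-c": 41.0, "node-d": 4.4}

def flag_degraded(metrics, threshold=3.5):
    """Flag nodes whose metric is a robust outlier versus their peers,
    using median absolute deviation (a simple, outlier-resistant
    stand-in for a real clustering algorithm)."""
    med = median(metrics.values())
    abs_dev = {node: abs(v - med) for node, v in metrics.items()}
    mad = median(abs_dev.values())
    if mad == 0:  # all nodes behave identically
        return set()
    return {node for node, dev in abs_dev.items() if dev / mad > threshold}

degraded = flag_degraded(node_latency)  # flags node-c
# Per the blog, the response is to alert and to keep leadership and
# critical services off the flagged node -- not to auto-repair it.
schedulable = set(node_latency) - degraded
```

Note that the last two lines are the whole story: the output is an alert plus a scheduling exclusion, which is exactly the gap between what the blog claims and "fixes the IT issue itself."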

Nutanix's Enterprise Cloud OS manages customers' private and public clouds to make them feel like one single cloud.

Be careful here: "feel like one single cloud" is a bit of hyperbole. One other poster previously misinterpreted that as meaning deployments on different clouds become the same. I corrected that misconception in this post: https://discussion.fool.com/what-nutanix-really-does-33158672.as…

Nutanix today has Beam. That's mostly a dashboard to help you monitor and manage your various clouds via a single tool in a "single pane of glass" type interface.

Phoolio18 writes: Nutanix allows its customers to avoid lock-in with a specific cloud provider.

Again, we need to be careful not to take the marketing hype too far. This claim is usually associated with the Frame offering (a recent acquisition, btw). It's currently limited to deploying VMs for desktops, not general cloud applications, which to this day still have to be written in mostly cloud-specific ways. If the holy grail is cloud-agnostic applications, no one has a solution today.

Will that be Nutanix? Sure seems like they want to head there, but it’s possible that the so-called serverless technologies will end up making that achievement close to moot.

I’m still bullish on NTNX, but I think we need to be careful about extending their marketing to what we want the company to be.

14 Likes

The Public clouds are not going to co-opt the market for multi-cloud. Want proof? VMWare is partnered with AWS, Nutanix is partnered with Google. Azure partners with no one because Microsoft is already in practically every enterprise so they do not need a partner.

There are fascinating articles on how the Nutanix partnership may be the thing that pushes Google from being the third horse in a two-horse race back into a three-horse race, and does so without being a "me too" product.

The Nutanix and Google deal also includes Nutanix's Sherlock (still in development), which pushes the data center from the center, to the cloud, and then to the edge of every IoT device, combining with Google's best-in-class machine learning.

I was going to do a more detailed post, and I'm itching to do so, but I gotta get out roller skiing. The above should be enough for those interested to start looking themselves, and to make the point clear that the cloud titans will not be co-opting the market any more than Amazon co-opted Twilio when Amazon was practically giving away its own contact center software last year. Remember that bit of panic?

As for VMWare, it is a duopoly. And the latest and greatest market share numbers show that Nutanix is #1 despite the fact that VMWare has an enormous existing customer base to sell into. It is as if Apple had been the equal of Microsoft in the PC era and the two battled it out. That is what we have going on here.

Finally, this Xi thing is a risk. It is an entrepreneurial risk. However, if successful, we are talking billions of dollars, and today's share price will be a pittance. I do not think I am making too big a case to say that Xi, and then Sherlock (but let's not get ahead of ourselves), make Nutanix into a large enterprise software titan, the equivalent of, say, VMWare (when younger but dominant) or a Salesforce of about 5 to 8 years ago.

That is, if successful, of course. If not successful, there is still HCI and much growth there, but I do not really know. I do know that a successful Xi will be bigger than anyone is anticipating, and that Sherlock (which few are talking about) literally makes the world the data center, all controlled with one button on one interface.

At least that is the vision. What I find telling is how the press talks about it. You can read a lot into these things.

Look at some of the articles. The latest: "IBM challenges Nutanix in multi-cloud…". Not "challenges VMWare," or Cisco, or AWS, or Azure, or simply "IBM offers its multi-cloud product." Nutanix was specifically referenced, and it was the only company referenced as the comparator.

With the Google/Nutanix partnership, I am seeing more articles that make it sound like Nutanix is in the driver's seat, and Google needs Nutanix more than the other way around. In fact, one article I am reading, which discusses how this Nutanix partnership may be key to moving Google back into the horse race (as discussed above), concludes with, "it depends on how committed Nutanix will be to the partnership…but Nutanix has a lot of incentive to be committed."

These articles discuss Nutanix as the company moving things, not Google, not AWS, not VMWare.

On NPI there are at least two threads I posted since last night discussing Nutanix (I think I linked to one here). But there is another one on Nutanix’s acquisition of Frame.

VMWare is having a tizzy because Nutanix bought Frame! Why? Well, about this time last year VMWare incorporated Frame as their virtual desktop software, a partnership that Forbes hailed as very telling as to the value Frame brings, and that VMWare would not build in-house. I know that AutoCad uses Frame, and they say that THERE IS PRESENTLY NOTHING ELSE THAT CAN DO WHAT FRAME DOES. That is why VMWare partnered with them.

Well, guess what? Nutanix just bought Frame out from under VMWare! Frame probably refused to be sold to VMWare, which is part of the larger Dell conglomerate; Nutanix is still basically a large start-up and an excellent environment for the Frame team to do what they do, with stock that has potential for large appreciation (relative to VMWare).

Now VMWare will be paying a toll to Nutanix with every Frame instance run through a VMWare system.

This is both scary and fascinating. I think I am giving the flavor here that there is risk, but this Xi product (and all its supporting pieces and vision), if rule-breaking (revolutionary? I do not understand it well enough to say), will be bigger than anyone thinks if successful, definitely changing the database landscape. And then they get to Sherlock, all against the background of HCI itself just starting to hit the mainstream, chasm-crossing part of the S curve.

A tad bit of enthusiasm along with anxiety. That is what it is like to be an entrepreneur, and that is what Nutanix still is, and that is what investing in Nutanix at this point is as the stock has collapsed by 40% or so, much further than the general market.

Now I have to get out roller skiing. Not something you see every day down here in the South! I get lots of wide eyes of wonder staring at me, but all friendly.

You can check out my OMG Frame thread on NPI. It is a short thread, but I think much of it is covered here in the context that I found useful.

Tinker

44 Likes

Thank you Tinker, as always, great insight.

Very interesting situation on Frame and also Google / Nutanix.

You certainly reduced my concern quite a bit about the two points I raised. Appreciate your sharing these thoughts.

The Public clouds are not going to co-opt the market for multi-cloud.

Um, "multi-cloud" is nothing but a number of different clouds, so that statement doesn't mean anything to me.

Want proof? VMWare is partnered with AWS, Nutanix is partnered with Google. Azure partners with no one because Microsoft is already in practically every enterprise so they do not need a partner.

OK, so I guess you're saying that public clouds won't kill private clouds off completely. But, taking a historical perspective, we've gone from data centers to public clouds, and now the remaining data centers are moving to private clouds (hosted by the enterprises themselves), essentially filling in a solution for people who didn't want to trust their data to a third party in the first place, or who think they can run their own servers more cheaply.

But the public clouds are here to stay, and if you weren't invested in Amazon or Microsoft over the past half decade or so, you missed most of that easy profit.

…the cloud titans will not be co-opting the market any more than Amazon co-opted Twilio when Amazon was practically giving away its own contact center software last year.

Apples and grapefruit here. Amazon's Twilio-like offering is but a feature/service of AWS (I think it was Amazon's global SMS service that had Twilio investors concerned a while back).

The real issue for Nutanix is how big is the Private Cloud market? Nutanix management probably understands this, which is why so many of their new products involve hybrid solutions (public and private clouds), and try to abstract the cloud provider out from the user. More on this later.

With the Google/Nutanix partnership, I am seeing more articles that make it sound like Nutanix is in the driver’s seat, and Google needs Nutanix more than the other way around.

For reasons I attribute to Google's really bad marketing (amazing how a company that makes most of its money off of advertising is so bad at advertising itself!), Google has mostly missed out on enterprise adoption of cloud services (which have gone mostly to Amazon and Microsoft). There's a chance Nutanix's products will help some enterprises move services to Google since it'll be easier. That's certainly possible, but I think it's a long shot that it'll make a significant difference in Google Cloud adoption.

Nutanix just bought Frame out from under VMWare! Frame probably refused to be sold to VMWare…

More likely is that VMWare wasn't prepared to buy out a competitor. This article (https://www.crn.com/news/data-center/300107541/vmware-coo-sl… ) has a great take on the acquisition and a telling response from VMWare's COO:
The gloves were off on Friday as one of VMware’s top executives knocked Nutanix’s acquisition of desktop-as-a-service (DaaS) specialist Frame, saying the move proves that Nutanix is “copycatting” VMware…“So you got this industry gorilla taking pot shots at you. Nutanix should be like, ‘Hey, they see me!’ It validates their approach,”

OK, now back to Nutanix's progress toward abstracting cloud providers away from users. This is where Beam comes into play, helping companies move apps/data between clouds and identifying the lowest-cost providers for their specific workloads. It's also where their Xi product is heading; Xi has a lot of hype, but today it only does private-cloud backup to the public cloud (and restore, too, of course).

But perhaps the real question is about the so-called "serverless" architecture, which provides APIs that are super easy to use and almost completely hides from the application writer any need to know about servers, server processes, virtual machines, scaling up or down, etc. Amazon pioneered this with Lambda, which is getting more and more adoption today. Nutanix definitely understands that perhaps the biggest appeal of public clouds is ease of use, not cost. It's trivial to set up an Amazon account and start making calls.
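To make the "hides everything about servers" point concrete, a minimal Lambda-style function is about this small (a sketch with an invented event shape; actually deploying it still requires an AWS account, an IAM role, and so on):

```python
import json

def handler(event, context):
    """AWS invokes this function per request. Note what is absent:
    no server setup, no processes, no VM sizing, no scaling logic --
    that is the entire appeal being described above."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"greeting": f"hello, {name}"}),
    }
```

That's the whole application; Amazon handles provisioning and scale-out behind the scenes, and you pay per invocation rather than per idle server.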

While HCI makes setting up in-house servers easy, it's going to be hard for Nutanix to beat Lambda's literally-zero setup. And if you're at all worried about scaling up and down: for in-house HCI machines, easy as they are, you're still buying hardware for peak loads. That's crazy for things like retail, where the peak loads on 10 days of the year are many times the load on the other 355 days.

Note that I'm still bullish on NTNX, as they seem to be moving quickly on many fronts and haven't yet stumbled on technology or quality (that we've heard of). Their NPS ratings are high. So, while I have some macro concerns, there's still plenty of runway for them in my view.

15 Likes

Smorgasbord, I stand by my post.

Lambda, for example, is a serverless environment, which means it does basically what Stitch does for MongoDB or Pivotal does for developers. It abstracts away the infrastructure.

You are not going to be running your SAP or large databases on AWS, for multiple reasons, not least of which are cost and lock-in; also, moving petabytes of data is still a real-world physical truck trip. Public clouds are not going anywhere, and neither are private clouds. Both are in fact growing. I would wager that private clouds are growing faster than public clouds at this point in time, as the cost-effectiveness of private clouds improves for various reasons (not least of which is HCI), and also out of practicality for the movement of data, the need for regionality, and security.

As such, it is a hybrid world. Nutanix enables the enterprise private cloud. The next big step is doing what most companies are starting to do: use the best resource, at the best time, for the project at hand. Nutanix is abstracting the entire process away (much as Lambda might, although I am sure there are multiple differences) so that enterprises do not get locked in, do not get stuck with higher costs than necessary, do not have regionality issues, and can feel comfortable with their security.

From this point there will be even bigger changes with a world of IoT. Sherlock (also partnered with Google) is stretching this abstraction beyond even the cloud so that you can now run your applications on Nutanix at the edge. And as you run more things on the edge (which will require an operating system, just as the center does), the center will also grow.

Unless you believe that the future is AWS and Lambda, and that Lambda will consume the market, I am not really clear on what your point is. Very few, if any, major enterprises are gonna run their whole ship just on AWS; instead, they will maintain a hybrid environment, and Nutanix is creating the OS for such an environment (assuming it works).

Not to refute your point, as I appreciate your viewpoint, but I believe your premise is mistaken and pointed at some hypothetical world that will not exist, where all material loads run on AWS or its equivalent. That will clearly not be the case. Further, AI itself makes that even less likely, as the hoards of data will be so large that the internet itself will not have the capacity to transmit it all. As such, data will need to remain local for the sake of functionality.

True, although Nutanix expects to run many AI loads, the Pure methodology of modern DAS may be very popular for running AI loads as well. Such DAS architectures do not require HCI. But then again, less than 20% of critical applications run under HCI today, with that number growing. So the fact that there may be more modern DAS for AI loads does not really impact the Nutanix model either. Nutanix will never be 100% of the data center. Its goal is to incrementally grow its share of what is run with HCI and expand the scope of what can be run from the private data center, to the private cloud, to the hybrid public/private cloud, to the edge of IoT.

To conclude where I started: Lambda with AWS is but one option in this universe of where loads can run, and hardly the dominant option that will materially suck up so much of the market that HCI with Nutanix and VMWare (Cisco is #3 with 5.5% market share, HPE #4 with 4.5%) will disappear, or in fact see any material impact on their businesses at all.

Tinker

14 Likes

Nuclio serverless platform is autoscalable with continuous integration and versioning for the code. “Essentially, it’s a packaged version of a cloud-native solution,” he stated. It also happens to be faster than bare metal and 100x faster than Amazon Lambda, according to Haviv.

“We’re doing 400,000 events per second on a single process — they do about 2,000,” he said, adding that most open-source projects hover near the 2,000 mark as well.

While serverless and database as a service are clear favorites of developers, some information technology organizations stubbornly insist on doing things the hard way, Haviv explained. “They like to build stuff. They want to take the Nutanix, and take 100 services on top of it, and it will take them two years to integrate,” he said.

The pace of digital transformation doesn’t forgive such plodding. “By that time, the business already moved somewhere else,” Haviv concluded.

https://siliconangle.com/2018/01/08/doting-developers-could-…

This is what Smorgasbord is talking about in reference to Nutanix and AWS. This company Nuclio is saying it does serverless 100x faster than Lambda. But one thing not discussed is that serverless is stateless. This means the serverless function does not retain the data or result that the action creates, so each time it is called it is as dumb as the first time and takes in data from elsewhere. It is the data that is stateful and thereby remembers things.
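A tiny sketch of that statelessness point (all names here are illustrative, not any vendor's API): the function itself remembers nothing between calls, so any state has to arrive in the request or live in an external store.

```python
# The dict stands in for external stateful storage (e.g., a database);
# in a real serverless deployment this would live outside the function.
counter_store = {}

def stateless_handler(event):
    """A stateless function keeps no memory between invocations:
    every call reads its inputs from the event and pushes any state
    it produces back out to the external store."""
    key = event["key"]
    counter_store[key] = counter_store.get(key, 0) + 1
    return {"key": key, "count": counter_store[key]}
```

The function body is identical on every call; all the "remembering" happens in the store, which is exactly why the data side of the stack stays important.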

It is beyond me, as a non-developer, much less an expert developer, to understand such things. But there is a reason why containers and Kubernetes exist and are prospering despite the rhetoric from Nuclio. This company, btw, seems like it would be a great acquisition for someone. But I digress.

Of note, the article mentions that in this context you use a "NoSQL database," which is what Stitch does with MongoDB.

It does take some context to pierce through articles like this. The market that Nuclio is discussing along with Lambda is expected to be $7.2 billion within the next 5 years or so. That is far smaller than the hyperconverged or multi-cloud converged markets. Nevertheless, it is a competitor to some of the container/Kubernetes business.

I am sure it is not rocket science, but this is for the real techies to sort out.

Tinker

5 Likes

You are not going to be running your SAP or large databases on AWS, for multiple reasons, not least of which are cost and lock-in; also, moving petabytes of data is still a real-world physical truck trip.

Amazon says otherwise: thousands of enterprise customers are running SAP workloads on AWS – with hundreds of those workloads in production. (https://www.businesswire.com/news/home/20180606005714/en/SAP… ) and for example: “With our previous, on-premises financial reporting system, uptimes were about 85 percent. Using AWS, we get uptimes of 99.99 percent for SAP S/4 HANA. Ultimately, AWS was more cost-competitive than the other cloud-service providers we evaluated. Plus, with AWS Availability Zones present in the Asia Pacific (Sydney) Region, we could keep our data securely stored and backed up in Australia. The scalability of the SAP S/4 HANA platform on AWS enables us to bring the benefits of SAP to other business units without the cost of implementing a new system,” said Craig Howard, General Manager of Finance, Mitsui Coal Holdings.

Amazon recently enabled large memory EC2 instances (from 4TB to 12TB, with 24TB promised next year) specifically to handle workloads like SAP HANA (https://www.theregister.co.uk/2018/09/28/aws_ec2_12tb_memory… ).

I would wager that private clouds are growing faster than public clouds at this point in time…

I have no feel for whether that’s true or not, but I did come across this which basically backs up your wager: https://www.suse.com/c/news/new-study-shows-cloud-adoption-b…

Cloud technology has matured to the extent that many businesses are adopting a cloud-first or even a cloud-only strategy. Growth is expected to continue for all types of cloud, especially hybrid (66 percent of respondents) and private cloud (55 percent), with 36 percent seeing public cloud growing.

Do note that the company doing the survey isn't independent and has skin in the game with regard to containers for cloud deployments (they work with Nutanix, btw). So they're not likely to want to show increases in serverless, which many consider a better solution, at least for public cloud deployments, where a running container racks up charges even when not being used.

Unless you believe that the future is AWS and Lambda, and that Lambda will consume the market, I am not really clear on what your point is.

My point is that the trend is toward making cloud services easier and easier to use. Nutanix is about making cloud servers easy to set up on premises. That's good, but the public cloud still has advantages for many applications.

I believe your premise is mistaken and pointed at some hypothetical world that will not exist, where all material loads run on AWS or its equivalent.

I’m hard pressed to find where I said this, Tinker.

To conclude where I started: Lambda with AWS is but one option in this universe of where loads can run, and hardly the dominant option that will materially suck up so much of the market that HCI with Nutanix and VMWare (Cisco is #3 with 5.5% market share, HPE #4 with 4.5%) will disappear, or in fact see any material impact on their businesses at all.

First of all, much of the impact has already happened. The market share you're touting is what's left over after so many applications moved from on-premise data centers to the public clouds. It's not like it's dogs fighting over table scraps, but there's no question that many applications are prototyped and then eventually rolled out on a public cloud. For years, VMWare fought the trend, and finally gave in last year with their "VMware Cloud on AWS" (https://www.forbes.com/sites/janakirammsv/2017/08/29/what-yo… ).

VMware’s decision to embrace AWS as its public cloud surprised industry analysts as well as customers. After all, AWS was an arch rival that continued to threaten the core business model of virtualization and private cloud.

This was after VMWare decided to roll out its own public cloud. Yeah, you never heard of it; that's how successful VMWare was with vCloud Air.

VMware’s customers were already making a move to AWS eroding the revenue opportunity around vSphere and vCenter. On the other hand, Amazon started to build tools for vSphere to lure enterprises to its public cloud. With the IT spend moving to public cloud, it would be extremely challenging for VMware to convince customers to continue investing in its flagship products. The partnership with AWS eases the tension by enabling customers to use VMware products while still moving to the public cloud.

And still more on VMWare’s troubles: Gartner research director Michael Warrilow told The Register he thinks that’s a weakness, by asking “Does your hypervisor matter in the public cloud? If you can get public cloud in house does the hypervisor matter any more?” The Register’s virtualization desk thinks lots of VMware-on-Microsoft users will ask themselves the same question when they next come to refresh either VMware licences or servers. So will those contemplating hyperconverged infrastructure. (https://www.theregister.co.uk/2017/07/11/azure_stack_debut_a… )

But, back to Tinker's argument: Nutanix could own 50% of the existing market share this year and it wouldn't impress me. The real question is whether Nutanix can expand its TAM. Right now, they're simply grabbing the low-hanging fruit, probably companies using traditional data centers. With HCI, it's now a lot easier to set up on-premise servers supporting the latest in virtualization, as well as to manage on-premise data storage and backup far more efficiently and cost-effectively than before.

In my mind, one of Nutanix's biggest competitors is Microsoft. With their "Azure Stack," people can run Azure on their own on-premise servers. Think about that. If you use the Azure public cloud today, you can painlessly migrate those applications to run on Azure on your own on-premise hardware, whether for compliance, security, or economic reasons. That's pretty compelling. Where Nutanix shines is making the on-premise servers easy to set up and manage. And yes, as that article states, you might use Nutanix to host your on-premise servers running Azure Stack.

The Fat Lady hasn't sung yet, and the opera is literally being written as it's being played. It's really hard to identify the eventual winners and losers. Right now, the financial analysis on this board has NTNX as a winner based on past results. But as the technology changes the market, the financials could also change. As I said before, I'm still bullish on NTNX, but it does warrant vigilance. It's not a buy-and-forget holding.

20 Likes

With our previous, on-premises financial reporting system, uptimes were about 85 percent.

This is a curious, but potentially interesting statement.

15% downtime … unless due to lack of need … is really, really terrible. So, are we talking about replacing some really ancient, now-flaky piece of hardware with anything vaguely modern, or what?

But the more subtle reference here is to "financial reporting system." Many high-transaction-volume sites will replicate their databases to a second system and use the second system for reporting, so that the heavy, intense read-only activity of reporting doesn't interfere with the transaction commitment of the posting database. Many companies find that moving high-transaction-volume databases to virtualized environments performs very poorly. The virtualized environment may do just fine for general-purpose file system activity or even relatively low-transaction-volume database use … including reporting, since it is read-only … but high-transaction-volume usage is poor. This is most conspicuous when the performance of the virtualized system is achieved via a write cache, which is satisfactory most of the time but fails spectacularly when one has to do something like restore a backup, overwhelming the cache.
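The replicate-for-reporting pattern described above can be sketched as a naive query router (the connection strings and the routing rule are my own illustration; real routers work at the transaction level, not per statement):

```python
# Hypothetical connection strings for a primary/replica pair.
PRIMARY = "postgresql://primary.internal/app"   # takes all writes
REPLICA = "postgresql://replica.internal/app"   # read-only copy

def pick_dsn(query: str) -> str:
    """Route read-only reporting queries to the replica so heavy
    report scans never contend with transaction commits on the
    primary. Deliberately naive: a real router would also consider
    transactions, replication lag, and read-your-writes needs."""
    is_read_only = query.lstrip().upper().startswith("SELECT")
    return REPLICA if is_read_only else PRIMARY
```

The point being made above is that this split exists precisely because reporting and transaction posting have opposite performance profiles, and virtualization tends to hurt the latter far more than the former.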

1 Like

Our discussions are getting a little dense, and as a talking raccoon/rabbit/triangle-shaped monkey/rat and perhaps a few others so aptly stated, "we didn't have to work out the minutia of the plan." So I am going to describe the trend first, and what, at a high level, Nutanix is doing to enhance the trend. That trend is workloads now moving away from public clouds into hybrid environments. Cost is a big reason for this, and one aspect of cost is not just the expense of the public cloud but also the cost of in-house data centers falling relative to the public cloud. Nutanix is part of why this is happening, and Nutanix is doing many things to further put its thumb on the scale, making in-house ever more cost-effective and even more valuable, because you can also use your in-house data center to work with the public cloud as an ancillary aspect of your IT strategy. I will discuss this in the context of serverless. Serverless actually adds complexity to the architecture, and thus an even greater need to manage it. Serverless is also limited in what it does. Thereafter I will post a bit from an article that explains this increasing complexity that serverless will add to the in-house data center.

So that was a big paragraph. Indeed, the more cost-effective and utilitarian the public cloud becomes, the less of the in-house data center you need. And that would be a net negative for Nutanix or VMWare. However, as described above (and perhaps a bit below), the trend is working the other way, due largely to economics. So, to start, without trying to be too boring or esoteric (word of the day! :wink: ).

SO below ******************** Bifurcation makes things easier to read I think *******************

I have tried to find articles showing imminent danger from a world where serverless computing rules. I have not found any. I figured that if this were a mortal threat there would be much written in that context, but there's nothing, really, other than a few snide remarks here and there. Instead, what seems clear is that serverless computing adds yet another complexity to the IT data center by creating yet another level of abstraction, and beyond this it multiplies many-fold the number of containers created to run all the stateless serverless apps. This dramatically increases the productivity of developers (thus justifying the value of serverless, mind you, but it will not necessarily make managing the data center easier). As the article I will link to below indicates, thank god for Moore's Law, because all these abstractions add yet another computing layer that servers need to run through.

I brought up before that serverless runs only stateless apps. These are apps that do not retain data or the resultant information from the application. Data is fed to the stateless container where the serverless instance runs, and the resultant output is sent somewhere else. The serverless app therefore remains the same, with no memory of anything it has done. {NOTE: and this may be helpful to understand that Nutanix has thought this through. Nutanix’s CEO, in a very interesting discussion, laid out the context that stateful loads will still run through containers, even as micro-services run within micro-containers. Serverless is not something Pandey has overlooked.}
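As a rough illustration of what “stateless” means here (a hypothetical sketch, not any particular platform’s API — the function name and event shape are made up):

```python
# A stateless serverless-style handler: everything it needs arrives in
# the event, the result is returned onward, and the function keeps no
# memory of prior invocations.

def handle_order_total(event):
    """Compute an order total purely from the incoming event."""
    items = event.get("items", [])
    total = sum(item["price"] * item["qty"] for item in items)
    # The output is sent somewhere else (caller, queue, database);
    # the function itself stores nothing between calls.
    return {"order_id": event.get("order_id"), "total": total}

# Two invocations are completely independent of each other:
first = handle_order_total({"order_id": 1, "items": [{"price": 10.0, "qty": 2}]})
second = handle_order_total({"order_id": 2, "items": [{"price": 5.0, "qty": 1}]})
```

The state (the orders, the results) has to live somewhere else, which is exactly why the data containers discussed below do not go away.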

Data has to come from somewhere and go somewhere to reach the stateless serverless app. It will be managed within containers, as it often is today. Thus there will be containers for data and micro-containers for serverless. And as we also know, data is growing geometrically, and it is often more efficient to keep functions close to the data that feeds, and derives from, the function. Thus in-house, where the bulk of data will remain. As such, unless the trend of moving loads back from public clouds reverses, the need for a solution like Nutanix will only grow, as the complexity increases from running everything that exists today plus the serverless abstraction on top.

Public clouds do lots of work; private clouds do even more work. The trend away from public clouds is for many reasons, but one such reason is quite simple: COST. It is becoming more economical to run things in-house (and this is a decade-old trend). The solution, to do what is best for each load (serverless or not), is to optimize where, what, and when things run. I.e., multi-cloud. Yes, Beam is an important tool for optimizing where, what, and when something runs, but it is not the only one.

Era, as an example, if it works as described, makes running databases even easier in-house. We already know huge amounts of data in the public cloud become problematic. Use Era to further ease the management of databases (right now Oracle and Postgres, but expanding) and you increase the relative value of running in-house, while also allowing Beam’s who, what, where analysis to take place as well.

Nutanix is doing multiple things to accelerate the relative cost advantage of running on premise data centers by making things easier to do so, and by INCREASING THE UTILITY TO RUN THINGS ON-PREMISE BY CREATING THE ANCILLARY OPTION TO SHARE LOADS WITH THE PUBLIC CLOUD WHEN UTILITY AND EFFICIENCY CALLS FOR IT. Thus, as things presently sit, as trends presently are moving, and as Nutanix is trying to further this trend, for many enterprises it will be better to run things on-premise, on the private cloud, as the first case, with the public cloud off-premise as the ancillary.


Again, if the public clouds change this trend, that may be a negative. BUT WAIT! One has to look at what this partnership with Google is about, not just Xi running multiple clouds. The edge, Sherlock, has not been taken into account.

Both Google and Nutanix are also thinking of IoT and the edge. Both understand that edge IoT may, in many circumstances, be better served by edge processing of the IoT data as well. This is what Sherlock is about. Sherlock allows IoT to run through Nutanix on the edge, then back to Google for AI that requires centralized processing, and then back to the on-premise data center where AI or other apps run.

Nowhere in this public discussion above about serverless does the need to manage on-premise, manage public cloud, and manage the edge disappear. It just gets more complicated and complication creates more need for better management tools to reduce complexity.


So how is a serverless environment in the data center run?

So with a serverless framework, where does the actual infrastructure come into the picture? It’s still there, just under multiple layers of abstraction. Talk about software-defined computing. With this latest evolution into serverless computing, we now have perhaps several million lines of system- and platform-defining code between application code and hardware. It’s a good thing Moore’s Law hasn’t totally quit on us. {Note: Software defined storage and HCI are different, with HCI being much more efficient and easier to manage for most use cases, thus this is referencing HCI}.

Let’s look briefly at all the sophisticated abstraction layers that we could stand up to build our own private cloud, serverless environment:

https://searchitoperations.techtarget.com/opinion/A-serverle…

At the bottom, there is going to be a physical server. Of course, we might need to revise our common concept of the physical box now that servers can be dynamically provisioned aggregations of component resources (e.g., cores, disks, memory, interfaces).

We’d likely deploy a layer of virtualization on top of physical servers. A hypervisor will cluster host physical servers and, in turn, serve out ephemeral VMs. We might also add further cloud provisioning and automation services, such as those found in OpenStack.

Into that virtual, cloudy environment we then deploy a container host cluster to provide container platform services, such as Docker, Kubernetes or OpenShift.

Then we install our containerized serverless computing platform to provide application lambda services, such as Platform9’s Fission.io.

Finally, we can create and deploy our microservice application on this serverless architecture by submitting functional code bits that will run on top of all of those layers.

For example, a microservice function can be written in JavaScript and declared to the lambda service. At that point, it’s mapped to some event trigger or API endpoint. When triggered, the lambda service will execute the function in its own container. That container will be running within the container host cluster, which we can spread across two or more VMs within a hypervisor clustering of physical servers.
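To make that “lambda service” dispatch step concrete, here is a toy, in-process sketch (all names hypothetical; a real platform such as Fission would execute each function in its own container rather than calling it directly):

```python
# A toy model of a lambda service: functions are declared and mapped to
# event triggers; when an event fires, the service runs the function.

class ToyLambdaService:
    def __init__(self):
        self._routes = {}  # trigger name -> function

    def declare(self, trigger, fn):
        """Map an event trigger (or API endpoint) to a function."""
        self._routes[trigger] = fn

    def fire(self, trigger, payload):
        # A real serverless platform would spin the function up in its
        # own container here, on top of the container host cluster,
        # VMs, hypervisor, and physical servers described above.
        return self._routes[trigger](payload)

service = ToyLambdaService()
service.declare("image.uploaded", lambda evt: f"thumbnail for {evt['name']}")
result = service.fire("image.uploaded", {"name": "cat.png"})
```

The point of the sketch is only the shape of the flow: declare, trigger, execute. All the layers beneath that call are exactly the abstraction stack the article walks through.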


As such, although I do see the merit of serverless, I am not sure how serverless removes the need for HCI or another system to manage all this complexity. I guess that is the question. Above is my understanding of the situation. Is there something about serverless that removes the need for HCI or another similar (if yet unknown) software management offering?

And please do correct me if I am wrong in anything above, as I am shocked myself at what I seem to know when I start to talk about it. Nothing above means I actually know what any of it means ;). Well, not much of it anyway, so I always appreciate the critique.

But this one simple question, I think, is the gist. What does serverless do to remove the need for HCI and Nutanix’s vision of the multi-cloud, and of the IoT world on the edge with Nutanix running processes on the edge as well?

Tinker

10 Likes

That trend is workloads now moving away from public clouds into hybrid environments.

I disagree with this. Hybrid cloud deployments are growing, but that’s due to a switch from data centers to private clouds, not from public clouds to private clouds. Also, be careful with web searches on this topic, as mostly it’s the private/hybrid cloud providers touting their wares.

Serverless actually adds complexity to the architecture, thus even greater need for management of it…Thereafter I will post a bit from an article that explains this increasing complexity that serverless will add to the in-house data center.

So, let’s be clear here. Serverless reduces complexity as it completely eliminates the needs to manage servers. Now, if you want to insist on running a hybrid environment, then you’ve got a cloud interface that has no management with an in-house cloud that requires you to completely manage it. So, while complex, it’s not correct to put that complexity on the serverless aspect - it’s on the on-premise hosting aspect.

Nutanix is doing multiple things to accelerate the relative cost advantage of running on premise data centers by making things easier to do so, and by INCREASING THE UTILITY TO RUN THINGS ON-PREMISE BY CREATING THE ANCILLARY OPTION TO SHARE LOADS WITH THE PUBLIC CLOUD WHEN UTILITY AND EFFICIENCY CALLS FOR IT.

Putting something in ALL CAPS doesn’t make it real, Tinker. While this is Nutanix’s vision, THEY ARE NOT THERE TODAY. Literally, the ONLY thing they do today with this, via Xi, is to backup your private cloud to the public cloud, and then restore from the public cloud to your private cloud.

I have tried to find articles showing imminent danger to a world where serverless computing rules. I have not found any.

Here’s one: https://thenewstack.io/serverless-impacts-on-business-proces…
Some snippets:
Serverless technologies are gaining traction in both enterprise and startup environments, at a pace much faster than seen with containers … serverless platforms… now reaching parity with virtual machine usage … Cloudability’s State of the Cloud 2018 report, analyzing the IT spend of 1,500 organizations in 2017 shows a serverless quarter-over-quarter growth rate of 667 percent … [w]hile cost reduction is definitely a benefit of serverless, it is much more about the velocity of development. “If teams are just writing code, they are only worried about the quality of the functions, they don’t have to worry about running on Linux version this or that. Those problems become someone else’s problems, team members can focus on writing code in Node, or Python, or any serverless language today, so everyone benefits.”

what seems clear is that serverless computing adds yet another complexity to the IT data center by creating yet another level of abstraction, and beyond this multiplying many fold the number of containers created to run all the stateless serverless apps.

This is silly, Tinker. What you’re focused on is an on-premise cloud attempting to figure out how to support serverless architectures. That article you linked was exactly that. What it actually shows is that serverless is a real advantage for public clouds. AGAIN, serverless usage results in NO server setup, NO server management, NO virtualization nor hypervisor usage, and NO containers. If it’s hard to replicate those advantages with on-premise machines and spinning containers up and down, well, all the more reason to go serverless via the public cloud, where you don’t worry about any of that and don’t pay when you’re not actually using the service.

I brought up before that serverless runs only stateless apps.

And here again you continue to ignore my references to MongoDB’s Stitch product, which literally ADDS STATE TO SERVERLESS APIS! (two can play at this all caps game) See https://www.zdnet.com/article/mongodb-stitch-serverless-comp… MongoDB’s new Stitch service is another approach to delivering serverless compute, but with one big difference: it’s got state.

Nowhere in this public discussion above about serverless does the need to manage on-premise, manage public cloud, and manage the edge disappear.

That statement couldn’t be more wrong. Public cloud serverless makes those management tasks disappear for callers. Those are all Amazon’s problems now.

So, what does this mean for Nutanix? Again, I think serverless makes it harder for enterprises to move applications from public clouds to Nutanix set up and managed on premise clouds. While HCI gets you software defined computing and storage and networking, it doesn’t enable them to support serverless APIs. What Tinker inadvertently points out is that doing so will be hard. And, that’s not good for Nutanix.

Maybe this is what Nutanix’s next product should be: serverless API support for HCI servers.


Back to bottom lines, I differ with Tinker on the trends here. HCI is growing, but that’s mostly replacing old-fashioned data centers, mostly not replacing public cloud usage. Serverless usage is indeed growing, and that’s definitely a public cloud advantage for now, with Tinker corroborating that supporting that on-premise is hard today. And while Nutanix’s vision is a world where applications can be moved from public to private clouds with ease, their reality is not that today. Nutanix’s Frame hurts VMWare, but VMWare was already hurting from the move from virtualization on private clouds to public clouds.

13 Likes

Below is a sampling of articles. Not all are put out by interested vendors. Each supports exactly what I stated. This is all I read about when looking at the subject. Hybrid, hybrid, hybrid. More than 50% looking at hybrid; hybrid growing faster than private cloud or public cloud…

http://www.datacenterjournal.com/industry-outlook-state-hybr…
https://searchcio.techtarget.com/feature/Data-center-facilit…
https://www.datacenterdynamics.com/opinions/cloud-experts-wa…

So, let’s be clear here. Serverless reduces complexity as it completely eliminates the needs to manage servers. Now, if you want to insist on running a hybrid environment, then you’ve got a cloud interface that has no management with an in-house cloud that requires you to completely manage it. So, while complex, it’s not correct to put that complexity on the serverless aspect - it’s on the on-premise hosting aspect.

I am not sure how you reach this conclusion. If you assume that the public cloud will dominate running serverless applications for enterprises, then yeah, it simplifies things, as the public cloud runs all the infrastructure.

However, as I laid out in my last post, serverless does not remove any of the complexity that currently exists, but instead adds another layer of complexity on top of it, along with multiplying the number of micro-containers to go along with data containers.

I can find no evidence that public cloud growth is slowing. What I do find is that companies (in general) are moving more loads back to on-premise data centers relative to what they are moving to the public clouds, and that they are running these on their private clouds, yes, but their real interest is running them hybrid. Hybrid is the leading trend, serverless or no serverless.

A larger risk to HCI is if something like Pure’s vision for the data center became predominant (not likely). Even NTAP has started offering converged offerings (they call it hyperconverged, but in reality it is much closer to converged).

It seems incongruent to say that serverless and the public cloud are materially affecting Nutanix when Nutanix’s financials are not only still superb but actually accelerating.

Absent enterprises moving things to the cloud and abandoning their on-premise data centers (with the exact opposite trend happening), serverless, as you agree above, increases the complexity of on-premise management of data centers. HCI is the leading solution to manage this complexity and by all accounts does it very well (even if not perfectly, as little is perfect).

We will have to disagree on this. The literature is not supporting your position as far as I can find, and the industry results are not supporting your position. The HCI market, according to IDC, grew 75% this past quarter. Same as the quarter prior to that:

https://www.theregister.co.uk/2018/09/27/idc_converged_syste…

Nothing in these numbers demonstrates enterprises moving to the public cloud in a manner that is making even a dent in the HCI market. And the HCI market is not even considered to be “mainstream,” as I discussed in a prior post, until around 2021 or 2022, according to IDC I believe, but it could also be Gartner. Fewer than 20% of critical workloads currently run on HCI, and this number continues to grow year after year as well. All this despite the growth of the public cloud.

Given this (and I will continue to ponder your posts, which I appreciate), I cannot find anything to support your position. Hypothetically, possibly, but there is nothing in the literature nor in the real-world performance of both Nutanix and the HCI market itself to support this position.

I will continue to look, but if serverless is really a mortal threat to HCI then there will be literature on it. There has been on every other issue I have ever researched regarding such things.

Tinker

3 Likes

However, as I laid out in my last post, serverless does not remove any of the complexity that currently exists, but instead adds another layer of complexity on top of it, along with multiplying the number of micro-containers to go along with data containers.

Tinker, at this point it’s clear that you don’t understand what’s going on. You need to take a moment and actually read what I’m writing.

You are rambling on about something that is not relevant and not real. Almost all serverless usage is via public clouds. THE WHOLE POINT OF SERVERLESS IS TO REMOVE ALL COMPLEXITY FROM CLOUD USERS.

If you want to insist that serverless is complex, realize that’s only if you’re trying to host serverless APIs via your on-premise servers. Also, realize that no-one does that. Serverless is almost always via a public cloud. All you’re doing is proving that on-premise HCI cannot compete complexity-wise with serverless!

The literature is not supporting your position as far as I can find, and the industry results are not supporting your position. The HCI market according to IDC grew 75% this page quarter.

I already provided references, you’re simply not understanding them, or not reading them. Public cloud usage continues to grow. HCI usage continues to grow. What’s shrinking is old-fashioned on-premise servers. There is no evidence of a large scale movement of cloud applications to HCI on-premise applications. I’m sure there are isolated examples, but it’s not the trend.

From the only one of your 3 links that actually works:
We’ve found that hybrid clouds are a great way for our customers to transfer from their legacy IT to a more efficient cloud model.

Get it now? They’re not moving from public clouds to hybrid or to on-premise, they’re moving from their own on-premise non-cloud servers.

There’s no evidence that enterprises are moving en masse from public clouds to private clouds. Public cloud usage is growing (just ask Amazon or Microsoft). Private cloud usage is also growing, but that’s at the expense of old on-premise servers, not at the expense of the public clouds. And finally, serverless usage is also growing, by over 600% just last year alone (as referenced in my previous post).

6 Likes

https://martinfowler.com/articles/serverless.html

Here is a long, detailed, and recent article on serverless. It is basically an introductory course one might take in college (well, almost). It sets out the pros and cons in great detail. In such great detail that it separates the cons into those inherent to serverless, which will thus never be substantially corrected, and those capable of mitigation, along with where the mitigation efforts presently stand.

As you read through it, much of what I brought up stands true. Serverless is a technology for some things; at present a niche, and perhaps forever a niche. But when it is the right tool for something, that something is well worth doing with serverless.

The article even discusses, as I brought up, the use of hybrid (although hybrid can be in the service cloud itself, so it is still a third party like AWS giving you the hybrid), but also, as I brought up, the use of serverless without a third-party provider within the on-premise data center. Precisely due to the stateless aspect of serverless, not to mention overcoming some of the limitations that AWS, as an example, self-imposes.

Serverless is a lot like Pivotal Cloud Foundry. Mongo has, of course, produced its own version. There is no doubt that serverless is the next round of efficiencies in the data center beyond containers (even though it uses containers). But as you read through this very long and detailed article (and frankly, I did skip some of the benefits, as I assume they are there in spades, and focused on the cons to get a good feel for the limitations), serverless is certainly not the be-all and end-all, and in fact is likely to remain a very useful niche.

I cannot see a major enterprise ceding all this control to one vendor. Serverless does create vendor lock-in. Serverless also requires less computing power than typical data center functions (this is by design; for example, AWS limits any such app to a run time of no more than 5 minutes, and something like 256 mb of space, along with the app unloading if not constantly in use, creating latency issues for infrequently run instances).

It does not sound like serverless is going to replace the containers of today, although no doubt serverless will be used for that which it is good at, as there are many benefits for those use cases where the cons do not apply.

In the end, this comes back to the multi-cloud, including the private data center. Running serverless on a third party where that is the best use case, running it locally where that is the best use case, and combining the two, particularly in data-intensive cases.

I really do not see where it changes the Nutanix thesis, as serverless comes nowhere close to replacing what is done in the data center, unless you decide to outsource your entire data center to one cloud vendor (something startups might do, but not something mature enterprises are willing to do).

Tinker

1 Like

Gartner predicts that by 2020, 90 percent of organizations will adopt hybrid infrastructure management capabilities. That said, it’ll be important to understand where these types of solution can impact your business and where you should be deploying hybrid.

https://www.datacenterknowledge.com/manage/be-aware-these-5-…

I have gone through many predictive articles from many sources. Most of these sources are vendor-neutral. To a “T,” HCI and converged technologies are what it is all about starting in 2018. Hybrid is the architectural model of choice. This is particularly true given improvements in HCI and the expansion of use cases, which is also opening up new customer segments, such as larger enterprises and the Federal government, as we have seen.

Tinker

Nevertheless, a growing range of private-cloud serverless platforms are coming to market, making it possible to implement entirely on-premises serverless environments and even private-public serverless hybrids.

Deploying serverless capabilities across hybrid clouds need not dilute the service levels that users have come to expect from similar capabilities deployed on-premises in enterprise environments. The private-cloud-grade experience can remain intact even as data, models, code, workloads, and other application artifacts are moved back and forth in complex, hybrid multi-cloud environments. This robust hybrid-cloud experience is the essence of what Wikibon calls the “True Private Cloud.”

https://wikibon.com/evaluating-serverless-frameworks-true-pr…

No, what I was talking about is indeed what is being bandied about out there. There is a lot of use of the public cloud, but enterprises are moving many functions back on-premise, and this includes using serverless in their private on-premise clouds and in hybrid, which is basically the public and private clouds working together.

As such, it does not appear that serverless is any more of a mortal threat to Nutanix than is Cisco or HPE, or even VMWare (who is admittedly a very tough competitor and #2 to Nutanix - but oh so very close).

Tinker

2 Likes

Here is a long, detailed, and recent article on serverless. … As you read through it, much of what I brought up stands true.

No, it doesn’t.

Please show us where Mr. Fowler confirms what you said, which was: Serverless actually adds complexity to the architecture, thus even greater need for management of it. He doesn’t, because your repeated statements to this effect are false.

Matter of fact, Fowler sums up serverless’s advantages thusly: Serverless architectures may benefit from significantly reduced operational cost, complexity, and engineering lead time

To be extremely clear, there is NOTHING in Mr. Fowler’s article about implementing a serverless architecture on a private cloud. It is, as I have been forced by Tinker to repetitively state, a public cloud service. Tinker is simply wrong on that.

The article even discusses, as I brought up the use of hybrid (although hybrid can be in the service cloud itself, so it is still third part like AWS giving you the hybrid, but also, as I brought up, the use of serverless without a third-party provider within the on-premise data center.

No, it doesn’t. Tinker keeps insisting on talking about implementation of serverless on private or hybrid clouds as being complex. But, AGAIN, those don’t exist, as Fowler himself states: Also some people use PaaS platforms like Cloud Foundry to provide a common development experience across a hybrid public and private cloud; at time of writing there isn’t a FaaS equivalent as mature as this.

Remember, serverless is one example of FaaS (Function as a Service). Tinker, please put this false and misleading argument to bed.

Fowler’s remaining use of the word “hybrid” is not in the same context as Tinker’s. Tinker refers to a “hybrid cloud” whereas Fowler talks about “hybrid architectures” in which different programming paradigms are used to support a given application.

Again, there is nothing in Fowler’s article to support Tinker’s stubborn statements on serverless being complex. Fowler says what I’ve been saying: that it’s LESS COMPLEX for programmers.

Serverless is a lot like Pivotal Cloud Foundry.

Tinker is a bit mixed up here, too. Pivotal Cloud Foundry includes a product called Pivotal Function Service (PFS), which is an example of a serverless API. There is also Pivotal Container Service (PKS), an API for use within container platforms like Kubernetes. So, enterprises have a choice of how they want to use Pivotal. But it’s incorrect to characterize serverless as “a lot like” Pivotal Cloud Foundry as a whole.

Serverless also requires less computing power than typical data center functions (this is by design; for example, AWS limits any such app to a run time of no more than 5 minutes, and something like 256 mb of space, along with the app unloading if not constantly in use, creating latency issues for infrequently run instances).

This isn’t the problem Tinker apparently thinks it is. Serverless computing is designed to be used with an application architecture based on what are called micro-services. The idea is that instead of having antiquated, giant, monolithic applications, solutions are structured in small pieces, each providing a small service (hence, “micro-services”). This is as big a shift in SaaS programming as Object-Orientation was for desktop apps a few decades ago. We can get into micro-service architecture if anyone wants, but it’s not relevant here. Tinker attempting to say that a micro-service architecture doesn’t support monolithic applications is like saying streaming video doesn’t support physical movie theaters.
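As a rough sketch of the micro-services idea (hypothetical services and names, not any real API), the monolithic checkout routine becomes a few small pieces composed at the edges:

```python
# Each function below stands in for one small, independently deployable
# service -- e.g. one serverless function each -- rather than one
# monolithic application doing everything.

def price_service(items):
    """Sum line-item prices."""
    return sum(i["price"] for i in items)

def tax_service(subtotal, rate=0.08):
    """Apply a (hypothetical) flat tax rate."""
    return round(subtotal * rate, 2)

def receipt_service(subtotal, tax):
    """Assemble the final receipt from the other services' outputs."""
    return {"subtotal": subtotal, "tax": tax, "total": subtotal + tax}

# The "application" is just the composition of the small services:
subtotal = price_service([{"price": 10.0}, {"price": 5.0}])
tax = tax_service(subtotal)
receipt = receipt_service(subtotal, tax)
```

Each piece fits comfortably inside serverless run-time and memory limits precisely because it does one small job, which is the point being made about why those limits aren’t the problem Tinker thinks they are.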

It does not sound like serverless is going to replace the containers of the today

In some cases yes, in some cases no. Here’s one case study: https://serverless.com/blog/why-we-switched-from-docker-to-s…
Essentially, this company switched from containers to serverless, and gained high availability, resiliency, and lower costs. Since integration, we’ve taken a serverless first approach; all new services are built in a serverless fashion unless there is an obvious reason not to go serverless. This has helped us dramatically shorten our release cycles, which, as a startup and a SaaS provider, has been hugely beneficial.

serverless comes nowhere close to replacing what is done in the data center

This is tiring, Tinker. Stop creating strawmen to tear down. No-one has said serverless is replacing data centers or what is done in data centers. Stop making crap up.

I really do not see where it changes the Nutanix thesis

OK, let’s drop the technology and talk trends. Your Nutanix thesis is that enterprises are moving applications from public clouds to private clouds, and so since there are lots of public cloud applications and Nutanix’s HCI helps enterprises setup and manage private clouds, they’ll do well.

My counter argument is that enterprises are not moving applications from public clouds to private clouds en masse. Sure, there are isolated examples, but the real trend supporting Nutanix is movement from old-fashioned, non-cloud data centers (perhaps old J2EE servers) to private clouds. Of course, the mega-trend over the past half decade has been the movement from those legacy data centers to the public cloud, but by now it’s too late to invest in that trend.

So, the question becomes what the TAM for private clouds really is. If it’s many public cloud applications moving to private, then Tinker is right and Nutanix’s TAM is large since today public cloud usage is large. However, if it’s mostly legacy data centers being replaced by private clouds, then the TAM is much smaller and limited to the number of remaining legacy data centers. This is my concern. As the legacy data centers disappear, so will the demand for new private cloud infrastructures. As the public clouds get better and better, it’ll be harder and harder for enterprises to justify private cloud installations.

Note that the rationale given by Nutanix for moving from public cloud to private cloud is mostly based around cost. Yeah, for some enterprises it’s compliance or security, but those aren’t the majority. And, cost is where public cloud features like serverless can help. As the case study I cited earlier shows, going from containers to serverless can greatly reduce public cloud service costs, and enable rapid scalability. That’s all I was saying, that the public clouds are responding to market needs, yet Tinker somehow blows that up into some serverless taking over the world argument, which it never was. That’s just another false strawman invented by him.

Private clouds do have a tough time with scalability, since scaling up means buying additional hardware. This gets us to what may be considered the holy grail of hybrid cloud computing - where you run something locally most of the time, but if demand increases the additional workload moves seamlessly to a public cloud. As I’ve said earlier, this is a nice vision, but Nutanix isn’t there yet.

Back to today’s reality, I think Nutanix’s management is aware of the limited TAM for new private clouds. That’s why they came out with Beam, a tool that helps enterprises understand and manage how they’re using public cloud services. A side effect is that it’ll show enterprises how expensive the public cloud can be, and maybe Nutanix thinks they can use that to sell the customer on Nutanix private clouds. But, migration from public cloud to private isn’t simple.

Finally, Tinker brought up vendor lock-in as a problem for serverless usage. But, while it’s true that using a serverless architecture can lock you into that vendor, and similarly for most of AWS and Azure as well, the same is just as true for almost everything that Nutanix provides. Once you’ve committed to Nutanix’s HCI for setting up server compute, distributed storage, and networking, you’re locked in. Same for using Nutanix’s Acropolis Hypervisor, which is another form of lock-in. Enterprises may be asking themselves whether they want to be locked into Amazon or Nutanix - which would you choose?

I remain skeptical of a Nutanix investment thesis that involves a shift of usage from public clouds to private clouds. Even if that were to start happening, I think Amazon and Microsoft are too smart to let it continue. They’ll adjust pricing and features as necessary. In the meantime, they both continue to make it easier to deploy applications on their public clouds than on enterprise private clouds. Serverless is but one example of that ease of use, which has a side benefit of reduced costs in many instances. Scalability remains an area where the public cloud shines over the private cloud.

So, I have a position in NTNX, but I’m watching it closely. I suspect that like most Saul positions, it’ll be abandoned in not that long of a time as its high growth won’t prove to be sustainable over many years.

15 Likes

Excuse me, Smorgasbord, but your hyper-technicality without looking at the practicalities of what I am discussing is getting old. You even get to “is a lot like Pivotal” and you say NAY!

Really? The whole purpose of serverless is to hide the infrastructure and just code the substance of the app itself.

The whole purpose of Pivotal is to hide the infrastructure and allow the developer to just code the substance.

Yes, they are not the same TECHNICALLY, and there are deployment differences, but practically they are trying to achieve the same dang thing! Increase developer efficiency by abstracting away the infrastructure.

I know very well the limitations of serverless compared to what Pivotal allows (which has few limitations). I really did not need to get into those technical details to understand that, from a practical perspective, they are attempting to achieve (with serverless being much more limited) THE SAME DANG THING!

And yes, my capitalization does make it true. Because yes, I am correct.

Hyper-technicality without examining what it practically means is self-defeating in investing.

This said, I do appreciate the latter part of your post talking about practicality of what it means for an investment and we can all take the information and make up our own minds in regard.

My whole point was to examine serverless to see if it was a mortal threat to Nutanix.

Tinker

3 Likes