Toss out the Cloud Stocks?

“I think he’s (Cramer) always a day late and a dollar short.” - roach1984

WRONG!

Naidorf,
Yes, it usually starts around 2 and crests around 3pm… etc.

Thank you for that post. I found it very informative.

I’m expecting the selling to continue, but have had difficulty formulating a theory around when we might expect to see a reversal. I do anticipate a reversal, as I don’t see any indicators of a recession or a long-term bear market - yet; though we all know one will come.

I was sort of anticipating a slowdown in the sell-off over the next week and then just idling until earnings reports start coming in next month. I had no knowledge of the way hedge funds operate with respect to withdrawals. I’ll check out your post on HF trades; I missed it earlier.

I don’t really have much spare cash, and I don’t try to time the market as I think it’s a fool’s gambit, but it’s also hard to resist the fact that many of these premier companies have gone on sale. Just what to sell in order to raise cash, since everything has taken a dive, and when to actually buy?

This whole investing game is quite a challenge . . .

My point is that the datacenter build-out and cloud/IT-spending booms are susceptible to a slowing economy, as with many other sectors, and some of the estimates of 40-50+% growth extrapolated out for years are a little bit naive in my opinion.

2 Likes

I think growth of the cloud is moderately past the earliest part of the S-shaped curve, so there’s plenty of fast growth to come. It could get stalled here and there by macro considerations, but that delay would only be temporary, just as it was with the Internet.
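
To make the S-curve intuition concrete, here’s a minimal logistic-growth sketch; the midpoint, steepness and time scale are illustrative placeholders, not a forecast for cloud adoption. On such a curve, the yearly gain keeps rising until the midpoint, so being only moderately past the earliest part implies the fastest growth is still ahead:

```python
import math

def adoption(t: float, cap: float = 1.0, midpoint: float = 10.0, steepness: float = 0.5) -> float:
    """Fraction of total addressable adoption reached at year t (logistic curve)."""
    return cap / (1.0 + math.exp(-steepness * (t - midpoint)))

# The absolute gain per year rises until the midpoint (year 10 here), then tapers:
for year in range(0, 16, 3):
    gain = adoption(year + 1) - adoption(year)
    print(f"year {year:2d}: adoption {adoption(year):.2f}, yearly gain {gain:.3f}")
```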

5 Likes

I think growth of the cloud is moderately past the earliest part of the S-shaped curve, so there’s plenty of fast growth to come. It could get stalled here and there by macro considerations, but that delay would only be temporary, just as it was with the Internet.

I agree. Thirty years ago we could not figure out what a home computer could possibly be used for: write some letters, balance the checkbook, and… what else? Counting mobile devices, they must now number in the billions. With all the fun stuff AI will do, there is no limit to the amount of data we will be able to play with.

Denny Schlesinger

1 Like

I was watching Cramer last night on Mad Money, and he said that there are CEOs telling him that things are getting bad. However, although Cramer is hard to understand sometimes, as I understood it he was talking more about the industrial companies (PPG, FLR).

So we have the original post, where the cloud companies are telling Cramer things are fine, and last night’s Mad Money, where some industrial CEOs are saying (publicly in some cases) that things are getting bad.

I think what we are seeing here with the cloud selloff is the market projecting that the pain the cloud companies’ customers are seeing is going to eventually reach the cloud companies themselves. Right now things look fine to the cloud company CEOs, but the market is saying that the revenue estimates for the cloud guys are too optimistic.

This is just one Nimrod’s semi-informed opinion.

By the way, what is the history of mo-mo stocks coming back after their revenue growth has stalled? Is TWLO this generation’s GOOG or is it more likely to behave like NTAP after the Internet bust?

1 Like

Darnit, Tinker! My crystal ball just fogged over . . .

A while ago you posted something to the effect that the government has defined “long term” as 366 days - the holding time required for ST cap gains to become LT for tax purposes. Maybe not very useful with respect to human relations, trade agreements and a host of other relationships, but for investing I thought that was a very useful definition, far more useful than “buy and hold” anyway.
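
Just to pin that definition down, a minimal sketch (my simplification of the rule: gains go long-term once the holding period exceeds one year, i.e. 366+ days in a non-leap year; the IRS actually starts counting the day after purchase):

```python
from datetime import date

def is_long_term(buy: date, sell: date) -> bool:
    """Simplified: long-term once held more than 365 days."""
    return (sell - buy).days >= 366

print(is_long_term(date(2018, 1, 2), date(2019, 1, 2)))  # False: exactly 365 days
print(is_long_term(date(2018, 1, 2), date(2019, 1, 3)))  # True: 366 days
```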

But now you ask us to peer into the future. And my crystal ball is giving me a 401 message. I’ll take a look at what I know from the past; some things have a way of not changing very much over time, so maybe some of my past experience is relevant.

Where I worked, the IT shop had three major segments: the application and DBA folks, the network folks and the h/w folks (compute and storage). Networks and h/w were under the same management umbrella. Applications and DBAs were under a different umbrella, though they all came together under some poor executive near the top of the management team. I say “poor executive” because the company kept shifting the job around until they finally created the position of CIO, who reported to the VP of engineering. Maybe some of these boundaries have merged or become more fuzzy with cloud computing, but the fact remains that the same technology functions are still required in order to deliver information services. And ultimately, the delivery of information is what it is all about. The technology is a means to an end, not an end unto itself. I always used to tell the h/w and network guys that they deal with the plumbing; it’s us s/w folks that deliver the water fit for consumption.

The reason I bring this up is that the question you are asking expresses itself in the budget cycle. How does a big company decide how it will allocate IT dollars? I’ve never been party to the top decision on how the IT pool is decided, but I imagine it’s not too different from the lower allocations. I assume there are two primary components that feed the request: the recent historical spend (maybe the last four or five quarters) along with the forecast. The senior executives each bring their requests forward and argue about budget requirements; eventually a decision is made and so much in total goes to IT.
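
If those assumptions hold, the first-cut ask is simple arithmetic. A minimal sketch (the function, the trailing-run-rate approach and the numbers are my hypotheticals, not how any particular shop does it):

```python
def it_budget_request(recent_quarterly_spend: list[float], forecast_growth: float) -> float:
    """Next year's ask: annualized trailing run rate, scaled by the forecast."""
    run_rate = sum(recent_quarterly_spend) / len(recent_quarterly_spend) * 4
    return run_rate * (1 + forecast_growth)

# Five trailing quarters of IT spend (in $M) and an 8% growth forecast:
print(it_budget_request([10.2, 10.5, 10.9, 11.4, 11.8], 0.08))  # ~47.35
```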

Once the IT pool is established, irrespective of segment it’s divided into two buckets: maintenance, which keeps the “lights on,” and development. I was not a h/w or network guy, but I was party to some of the budget discussions. The group that always got the highest priority and never got turned down was the storage group. And their requirements always got bigger. They always got bigger because every new byte was in addition to everything that was there before. Nothing (OK, almost nothing) was deleted, purged or even archived in a more compact form.

My numbers might be off because I participated in a study a number of years ago and don’t recall the stats exactly, but something like 20% of the stored data, give or take, was useless or stale: retirement party attachments, personal emails, just old and out-of-date stuff. Digital gunk of no value. Another 20% or so was orphaned - there was no longer anyone who took responsibility for it. The author may have retired, transferred, whatever. They left a digital legacy behind; maybe some of it still had some utility, but it was not being maintained and kept up to date. For example, how many of you have a Myspace account? It’s probably still out there.

It made no difference. Storage said we don’t have any control over that and pretty much got the requested budget. Networks would assert that if you don’t have the capacity to transmit that stuff, storing it doesn’t do anyone any good. It’s hard to argue with that logic, mainly because it’s pretty solid. They usually got their budget request. That left the s/w folks and the compute h/w folks, who were tied to the s/w folks. That’s where the budget battles took place.
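
The never-delete dynamic compounds quickly, which is why the storage ask only ever grew. A minimal sketch with illustrative numbers (the 25% annual growth is invented; the ~40% dead-weight share loosely combines the stale and orphaned figures above):

```python
# Storage that is only ever added to compounds quickly -- illustrative only.
base_tb = 100.0     # starting footprint (TB)
annual_new = 0.25   # 25% of the base added each year, nothing deleted
dead_weight = 0.40  # ~20% stale + ~20% orphaned, per the study above

size = base_tb
for year in range(1, 6):
    size *= 1 + annual_new
    print(f"year {year}: {size:6.1f} TB total, ~{size * dead_weight:5.1f} TB dead weight")
```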

Maintenance was not a very high-esteem job, but it was quite secure. Maintenance kept the rest of the production operations of the company running. These were the folks that did bug fixes, minor enhancements, tuning, etc. For COTS s/w (commercial off-the-shelf software) they interfaced with the vendor, submitted bug reports and enhancement requests, and tested and coordinated implementation of version upgrades and so forth. Related compute h/w folks had to ensure adequate capacity to handle peak loads, and whatever else they did I never paid too much attention to.

Last in the budget priority list came development. This is where I spent the majority of my career. When I started, development meant the stuff we were going to build. By the time I retired, every set of requirements first went through a build/buy analysis, with a clear partiality for buying. But there’s an interesting s/w budget phenomenon known as the Jones Law of Budget Migration (Mylar Jones was an analyst who was a member of the Yourdon Group). The law states that dollars always migrate from that which is tightly measured and managed to that which is less rigorously measured and managed. Although development was the last group to get funded in the budgetary process, it consistently overspent its budget, and more dollars were consistently found after the annual budget cycle in order to feed development.

The project tripod is schedule, resources (budget) and requirements. Mess with any leg of the tripod and you disturb the other two. Anyone who’s ever participated in a major s/w development project knows the term “requirements creep.” Maintenance requirements would often find their way into development projects. And every expert user on assignment to provide requirements to the development team would always find new, must-have requirements after the project was planned and underway. Where I worked, schedule was sacrosanct, so if requirements expanded, which they inevitably would, more budget was required in order to maintain schedule. That was the theory, but if you’re familiar with The Mythical Man-Month (Fred Brooks), you know you pretty soon bump into a limit where increased budget won’t save the schedule: you can only demand so much OT, and once you start adding new people it can be counter-productive. Agile methodology was developed at least in part from recognition of requirements creep. I’m not sure how it addresses budgetary requirements, as it was just coming into practice where I worked around the time of my retirement.
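
Brooks’s point about new people being counter-productive comes down to coordination overhead: pairwise communication paths grow quadratically with team size, so the coordination cost of new hires can swallow the hands-on work they add. A minimal sketch of that arithmetic (the team sizes are just examples):

```python
def communication_paths(n: int) -> int:
    """Pairwise communication channels in a team of n people,
    per Brooks: n * (n - 1) / 2."""
    return n * (n - 1) // 2

for team in (3, 5, 10, 20):
    print(f"{team:2d} people -> {communication_paths(team):3d} channels")
# Doubling a 5-person team to 10 takes the channels from 10 to 45.
```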

So, how does the annual IT budget cycle interplay with the s/w products many of us are invested in (and to some extent the h/w as well)? Well, first off, I suspect maintenance hasn’t changed much. They still interface with the vendor on an ongoing basis and perform pretty much all the functions they did when COTS was purchased under a perpetual license with annual maintenance contracts. The maintenance contract is now the annual subscription fee. Companies are spared the big up-front acquisition cost, which makes the vendor sales job a whole lot easier (though there are still spin-up costs in training, business process redesign, deployment and other implementation costs).
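
To see why the subscription is so much easier to digest, here’s a minimal cost sketch; the $500k license, $150k/yr subscription and the ~20% maintenance rate are hypothetical numbers of my own, not any vendor’s actual pricing:

```python
def perpetual_total(years: int, license_fee: float, maint_rate: float = 0.20) -> float:
    """Up-front license plus annual maintenance (often quoted near 20%)."""
    return license_fee + license_fee * maint_rate * years

def subscription_total(years: int, annual_fee: float) -> float:
    return annual_fee * years

# Hypothetical $500k perpetual license vs. a $150k/yr subscription:
for years in (1, 3, 5, 10):
    print(years, perpetual_total(years, 500_000), subscription_total(years, 150_000))
# Year 1 is $600k vs. $150k; the cumulative lines don't cross until year 10.
```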

Given that the build/buy analysis is clearly biased towards buying, our s/w vendors have a better shot at landing contracts than they might have had 10 years ago. And as already mentioned, an annual subscription is a whole lot easier to digest than a big acquisition spend. The “land and expand” model perfectly fits the new role of development teams, which is to gather and analyze user requirements and develop s/w solutions (even if that means subscribing to a commercial product). And there’s usually still some internal s/w development required. Despite products like Mulesoft, data conversion and integration still require some internal s/w development (well, maybe not, I retired a while ago). Testing is an in-house activity, much of which utilizes s/w scripts. There are the previously mentioned spin-up costs, which are borne by development. And then there’s the phenomenon of requirements creep, addressed by the “expand” part of the vendor business model. This is exactly why these vendors partition their products the way they do.

Is this going to reach a saturation point? That, I think, gets to Tinker’s original question. Certainly not in the long term (366 days). How about the longer long term, five years or so?

Well, I just spent a lot of words addressing the annual IT budget cycle. But there’s another aspect of IT spend: the 3-year (give or take) refresh cycle and the once-in-a-while technology interjection cycle. First, what’s the difference? The technology interjection cycle is a major shift in the way computing is undertaken. The transition from dumb 3270 character displays to smart terminals with GUIs is an example. The transition from mainframe to distributed processing is an example. And now the move to cloud computing is an example. This is a paradigm shift (overused term) in the way companies conduct IT. For clarity, the refresh cycle pretty much maintains continuity with what was there before, but upgrades it with respect to capability, performance and form factor. My first smart terminal came with a tower computer; about three years later I got a desktop computer; later I got a portable laptop computer; that was followed by a more powerful laptop that could host better, more secure s/w; and so on. Sometimes it’s not so easy to distinguish between technology interjection and refresh. Was the transition from tape to farms of 8" disc packs to RAID devices and now flash a refresh or an interjection? I tend to think of them as interjections, but maybe not.

In any case, these two IT spend cycles are probably the ones most germane to Tinker’s question. First, they are on a longer time frame; second, they drive the advancement of the entire IT environment. So is there a finite number of data centers that will service the world? Probably, but I don’t think we’re anywhere close to it, whatever the magic number might be. If there’s a relationship between data centers and s/w sales (and I think a correlation can easily be demonstrated, though I don’t have the stats to prove it), then there are tons more s/w sales opportunities available.

How is TAM defined? How do you account for Africa when establishing TAM? Pretty much ignore it? The Chinese (you know, the world’s 2nd largest economy, with which we started a trade war) are actively engaged in infrastructure and trade development all over Africa. To America, it’s just a collection of sh***ole countries. But those countries are rich with natural and human resources. Yeah, there’s tribal warfare and civil strife and rampant corruption and all the rest of that (does that really sound so different from the USofA? Well, OK, we don’t quite have open warfare at present), but there are also big cities, an increasingly well-educated population, expanding modern infrastructure, rapidly growing business needs and so forth. Not evenly distributed across the enormous continent, but clearly the march is forward into the information age.

How about China and the rest of Asia? Have we ceded that to the Chinese? Increasingly it looks like we have; we pulled out of TPP. But might some of our companies find large markets there if allowed to play on that turf? Last year when I was in China, almost everywhere you went you had to pay with cash. This year every little hole-in-the-wall store accepts Tencent pay and Alipay. A few days ago I was on top of a mountain at the Longjing terraced rice fields (an unbelievable sight); there was an old lady with an open-air booth selling handicrafts and such, no cash required, pay with your phone (yes, there was cell phone service on top of a mountain in rural China). In about a year, China has pretty much transitioned to an almost cashless society. And that means all those cash transactions that used to be invisible are now tagged with who paid, when they paid, how much they paid, where they paid and what they bought. But never mind, we’d rather start a trade war than penetrate the potential market with the largest and fastest growing middle class in the world.

I’m not trying to get political. The question is, I think, whether we will bump up against an upper limit for the kinds of investments we focus on as we approach the limit of data centers, and if so, whether that will happen in the foreseeable future. I think the answer is: yes, sort of. It will if we cede future growth to the Chinese, which seems to be our current course.

The Chinese more or less invented international commerce and capitalism via the Silk Road, the trade routes to markets in the West. They tried to expand this in the 1400s (read 1421: The Year China Discovered the World by Gavin Menzies) only to find primitive peoples and few new trading partners. They are at it again with Xi Jinping’s multibillion-dollar Belt and Road initiative. The communist revolution was a 30-year aberration (approx. 1950-1980) in their 5,000-year history (OK, they had a long feudal period; I’m not sure what marked the end of that, probably around 221 BC when the warring states were unified under the Qin dynasty). I think the markets are there and the opportunities will remain ripe for many years to come if we don’t blow it and simply turn our backs on it.

While I have a lot of reservations about investing in Chinese businesses, I have no doubt that the Chinese will displace the US as the world’s largest economy, probably in my lifetime, of which there is not that much remaining. I don’t think the number of data centers is really a significant factor, or at least not the determining factor. What will make the difference is political smarts, financial smarts, business smarts and societal smarts acting pretty much in unison. From my perspective, that’s not happening here and now (I mean the US when I say here, even though I’m currently in China).

15 Likes

https://www.networkworld.com/article/3313319/private-cloud/p…

This shows cloud datacenter spending still increasing while traditional private datacenter spending decreases. Private cloud in the datacenter is the new norm.

I have seen firsthand the explosive data requirements of IoT; every mfg company will be investing in this to maintain competitive operations. It only makes sense to do private cloud, or consume it as a service.

4 Likes