Fastly presents to Merrill Lynch

Celebration, FL to Fort Worth, TX is over 1,100 miles, not 175 miles.

1 Like

The sources of the articles one quotes matter. Each of the two articles quoted has problems:

Problem 1: Expecting an investing site like The Street to get Edge Computing right. And even then it doesn’t help to quote the platitudes they write:
By conducting operations on the edge, systems and networks can perform more reliably, swiftly and efficiently without compromising functionality.

That quote is unsubstantiated nonsense. There’s nothing inherently more reliable or efficient about computing at the Edge. Even “swiftly” is debatable: for some operations, more powerful central cloud servers can save more time than the additional network latency adds.
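To make that concrete, here is a toy back-of-the-envelope comparison (all numbers are invented for illustration, not measurements of any real service): when the work itself is compute-heavy, a distant but powerful central server can still beat a nearby but modest edge node.

```typescript
// Toy latency comparison with made-up numbers, purely illustrative.
// Total response time = network round trip + compute time at that location.

const edge = { rttMs: 5, computeMs: 400 };      // nearby edge node, modest CPU
const central = { rttMs: 60, computeMs: 80 };   // distant region, powerful servers

const totalEdge = edge.rttMs + edge.computeMs;          // 405 ms
const totalCentral = central.rttMs + central.computeMs; // 140 ms

console.log(`edge: ${totalEdge} ms, central: ${totalCentral} ms`);
// For compute-heavy work, central wins despite the extra 55 ms of network
// latency; flip the compute numbers and the edge node wins instead.
```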

At least quote basic truths like:
Latency issues constrain functionality

Even if the article doesn’t discuss the particular functionalities that are constrained. Which is one of my original points - not everything, heck, not even most things, needs ultra-low network latency.

Problem 2: Picking a legacy vendor like StackPath to get Edge Computing right. We’ve gone over this ground here before: Fastly even warns readers that their competition will try to present their outdated architecture as an advantage: “legacy vendors have resorted to positioning this limitation of their networks as a strength.” (https://www.fastly.com/blog/why-having-more-pops-isnt-always… )

The salient aspects to consider are pretty simple, even if building the right solution isn’t:
A) Local compute (Edge Endpoints) has zero network latency and ultimate privacy, but cost, size, and power requirements have to be considered.

B) Central cloud compute is the most scalable and, with existing cloud technologies, arguably the most straightforward to implement. It is, however, the most dependent on network reliability and has the most network latency, each of which can be somewhat addressed through standard distributed cloud computing techniques.

C) Edge Servers (to distinguish from Edge Endpoints) can reduce network latency even further, but bring in additional complexity as the compute is now very distributed, yet often needs to be co-ordinated.

There are other considerations as well, but not worth getting into here.
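For what it’s worth, here is one way to sketch the A/B/C trade-off as a decision rule. The field names and thresholds below are invented for illustration; they are not anyone’s actual placement logic:

```typescript
// Rough placement heuristic for a single operation (all names and
// thresholds are assumptions made up for this sketch).

type Placement = "edge-endpoint" | "edge-server" | "central-cloud";

interface Operation {
  needsPrivateLocalData: boolean; // e.g., raw sensor/mic data that shouldn't leave the device
  needsGlobalState: boolean;      // e.g., must coordinate with centrally held data
  latencyBudgetMs: number;        // how long the user or application can wait
}

function suggestPlacement(op: Operation): Placement {
  if (op.needsPrivateLocalData) return "edge-endpoint"; // (A) zero network latency, maximum privacy
  if (op.needsGlobalState) return "central-cloud";      // (B) simplest when the state is centralized
  if (op.latencyBudgetMs < 50) return "edge-server";    // (C) only worth the extra complexity when milliseconds matter
  return "central-cloud";
}

console.log(suggestPlacement({ needsPrivateLocalData: false, needsGlobalState: false, latencyBudgetMs: 20 }));
// -> "edge-server"
```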

In short, the main advantage of Edge Computing is lower latency and partial independence from network issues. What matters in performance tuning, however, is overall solution performance. In most cases, saving a few milliseconds of network latency with a multitude of Edge Servers may be less beneficial than focusing on other aspects of the solution that may yield 10X or 100X the time savings.
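A quick illustration of that point, again with invented numbers: if most of a request’s time goes to application logic and the database rather than the wire, shaving network latency with edge servers barely moves the total, while fixing the slow query does.

```typescript
// Hypothetical breakdown of one end-to-end request (all numbers invented).
const breakdownMs = { networkRtt: 30, appLogic: 40, dbQuery: 180 };
const total = breakdownMs.networkRtt + breakdownMs.appLogic + breakdownMs.dbQuery; // 250 ms

// Option 1: edge servers cut the round trip from 30 ms to 10 ms.
const withEdgeServers = 10 + breakdownMs.appLogic + breakdownMs.dbQuery; // 230 ms, ~8% faster

// Option 2: an index or cache cuts the query from 180 ms to 20 ms.
const withFasterQuery = breakdownMs.networkRtt + breakdownMs.appLogic + 20; // 90 ms, ~64% faster

console.log({ total, withEdgeServers, withFasterQuery });
```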

Going back to the Alexa example, today it already employs both Edge Endpoint and Central Cloud computing. It’s not hard to imagine a network of Edge Servers that might be able to respond more quickly to requests from Echo devices if the actions to be taken are local in nature. For instance, turning your lights on. However, if you’re re-ordering something you’ve previously bought via Alexa, then the central server might be best (unless you have a Fastly-style CDN network so that your local POP knows what you’ve previously ordered without having to pull from Amazon central).
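As a sketch of how that split might look at a hypothetical edge POP (this is not Fastly’s API or Amazon’s actual architecture, just illustrative TypeScript with made-up function names):

```typescript
// Hypothetical request router at an edge POP. "edgeCacheLookup" and
// "forwardToCentral" are stand-ins invented for this sketch.

type AlexaIntent =
  | { kind: "smart-home"; device: string; action: "on" | "off" }
  | { kind: "reorder"; item: string };

async function handleIntent(intent: AlexaIntent): Promise<string> {
  if (intent.kind === "smart-home") {
    // Purely local action: the edge POP (or the Echo itself) can complete
    // this without a round trip to a central region.
    return `Turning ${intent.device} ${intent.action}`;
  }

  // Re-ordering needs the customer's purchase history. With a CDN-style
  // edge data store, that history might already be cached at the POP;
  // otherwise fall back to the central service.
  const cached = await edgeCacheLookup(`orderHistory:${intent.item}`);
  if (cached) return `Re-ordering ${intent.item} from your past orders`;
  return forwardToCentral(intent);
}

// Stubs standing in for whatever edge storage / origin calls a real platform provides.
async function edgeCacheLookup(key: string): Promise<string | null> { return null; }
async function forwardToCentral(intent: AlexaIntent): Promise<string> {
  return "Handled by the central service";
}
```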

If anything, this brings to light the problems with what I call Buzzword Investing. At the turn of the century, 3G was one such buzzword. It was “Mobile Broadband.” In 2006, people were wondering if it was worth the cost (https://www.nytimes.com/2006/07/30/technology/30iht-3G.html ). Of course, that was the year before the iPhone changed everything. It turned out the better investment was not in 3G infrastructure, but in the handheld device that made use of faster data (and even the iPhone didn’t support 3G until its second generation).

So, when people say “I want to invest in 5G” or “I want to invest in Edge Computing,” I am less sanguine. I’m sure there’s money to be made somewhere in companies related to those technologies, but not in the technology itself, and I personally won’t invest until I see the “game changing” applications come out.

13 Likes

But, Starrob, by that description you are not just describing remote surgery, but surgery with a huge AI component. That is an entirely different issue.

Remote surgery would require local surgeons, nurses, etc. on the spot in case of internet or mechanical failure. They would have to be suited up, in the OR, and no doubt would expect to be paid for their time.

1 Like

Smorgasbord1, your last paragraph is exactly the approach I am trying to take to Fastly, CDNs, 5G, Edge, and IoT. I have not bought Fastly because I believe the true wealth-building enterprise will not be Fastly but the “killer app” maker that uses Fastly’s edge infrastructure. So far, though, it’s Fastly that’s doing well in the stock market, and I am still at a loss to name any public company growing sales fast by taking advantage of FSLY’s enabling tech.

So far, IoT seems to be a lot of small applications rather than any big one I know of, or anything heading toward a big one.

Can someone wake me up when the remote surgery discussion is over? :wink:

No disrespect meant to any posters, I just don’t see that as a very large segment in need of such in-depth discussion on this board…but I may just be a little slow and am missing something (wouldn’t be the first time).

28 Likes

Remote surgery would require local surgeons, nurses, etc. on the spot in case of internet or mechanical failure. They would have to be suited up, in the OR, and no doubt would expect to be paid for their time.

Even without failures they need to be there to prepare the robot and the patient. Anesthesia, etc. The only remote staff is the surgeon.

Denny Schlesinger

1 Like