Fastly presents to Merrill Lynch

Fully agree; the information unveiled in this call goes well beyond the Q1 conference call, and I truly believe making Joshua the CEO of FSLY is a huge win for Fastly and for shareholders.

Zoro
Super long FSLY

After the internet traffic surge normalizes, say in 2021, why should Fsly be able to maintain its 50%+ growth? We can hope computing @ Edge takes off, but we don’t know that for sure.

Texmex, you are thinking of Edge in terms of human traffic, but the real driver will not be humans (7.8 billion at last count) but IoT devices, which I have heard estimated at 75 billion in one case and trillions in another (I don’t recall the time frames). Such an explosion of IoT devices practically guarantees that Edge will happen, but it does not guarantee Fastly’s future.

Denny Schlesinger

10 Likes

Such an explosion of IoT devices practically guarantees that Edge will happen but it does not guarantee Fastly’s future.

Yes, and that’s also at least partially because what is “the edge” is fuzzy. For autonomous automobiles, the edge is today typically the car itself. There is talk of moving compute to 5G cell towers and the like, but that seems unlikely for autonomous driving use cases in my opinion. What Fastly could provide at the edge is traffic information and routing. No need for a central server to know about traffic in CA and MA, for instance, and even today when you route with traffic that’s often done on a server, not in the vehicle, so that server could ideally be a local “edge” server - no need to be central for all routing calculations, either.

Moving compute out of data centers to more locally positioned servers does make sense for some applications. Fastly already gives the example of the NY Times using edge compute for login and security. That means as a NY Times subscriber, you don’t have to log in to a NY Times central server, but can do so at one of Fastly’s edge servers. There will be even more examples for Fastly as Compute@Edge comes out of Beta.

That said, I still think that as compute continues to get cheaper, that some applications will move from a central server to the device itself and bypass the “edge” as a CDN might define it. I haven’t seen any studies that indicate what the relative percentages of where compute will end up might be. At this point the whole compute world is expanding so greatly that I think all compute locations will see new business.

But, despite my apparent pooh-poohing of some of the edge compute cases being touted in the media today, I do think it is real and will have real advantages for some set of real use cases. And the tricky part of the problem, which Fastly seems to have at least partially solved, is doing compute on local edge servers without having to “phone home” to the main central server. The NY Times login example proves they can do that today - certainly the NY Times’ own central server has the whole up-to-date truth of which users have valid accounts, yet Fastly treats that information CDN-style, so its POPs also hold much of that information and can be updated really, really quickly (a Fastly advantage compared to other CDNs), and thus logins can happen locally most of the time.

6 Likes

That said, I still think that as compute continues to get cheaper, that some applications will move from a central server to the device itself and bypass the “edge” as a CDN might define it. I haven’t seen any studies that indicate what the relative percentages of where compute will end up might be. At this point the whole compute world is expanding so greatly that I think all compute locations will see new business.

Having observed computing for 60 years, I can say that it has moved from core to edge and back several times as technology and use cases developed. Right now IoT technology and use cases are pushing computing to the edge. The one insurmountable obstacle I see in moving to the core once again is the limiting effect of the speed of light.

Recently I read an article about mushrooms, or some other kind of fungus, that covered acres and acres of land but was just one distributed organism. I can see worldwide computing evolving into something like that, everything connected to everything else on an as-needed basis.

Science fiction becoming reality.

Denny Schlesinger

6 Likes

Recently I read an article about mushrooms, or some other kind of fungus, that covered acres and acres of land but was just one distributed organism. I can see worldwide computing evolving into something like that, everything connected to everything else on an as-needed basis.

Isn’t Tesla well on the way to this kind of structure?

1 Like

The one insurmountable obstacle I see in moving to the core once again is the limiting effect of the speed of light.

Yeah, I was in a thread on the paid Fastly board recently where someone was claiming that remote surgery was an edge-compute application. Which, of course, it isn’t - it’s an end-to-end command application. Even so, when that person said a 10ms delay was the maximum tolerable and that remote surgery would be done from 1,000 miles or more away, I had to point out that was impossible. Light through the best fiber optic cable takes about 16ms for a 1,000-mile round trip - and that’s without repeaters, routers, copper-to-fiber conversion at the ends, etc.
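The 16ms figure is a round-trip number, and it checks out from first principles. Here is the back-of-the-envelope math, assuming a typical silica-fiber refractive index of about 1.47:

```python
# Back-of-the-envelope: propagation delay in optical fiber, nothing else.
C_VACUUM_KM_S = 299_792      # speed of light in vacuum, km/s
FIBER_INDEX = 1.47           # typical refractive index of silica fiber (assumption)
MILES_TO_KM = 1.609344

def fiber_rtt_ms(miles: float) -> float:
    """Best-case round-trip time in milliseconds over a fiber path."""
    one_way_s = (miles * MILES_TO_KM) / (C_VACUUM_KM_S / FIBER_INDEX)
    return 2 * one_way_s * 1000

print(round(fiber_rtt_ms(1000), 1))  # ~15.8 ms, before repeaters and routers
```

Even at the physical limit, a 1,000-mile command-and-response loop already exceeds a 10ms budget before a single piece of network equipment is involved.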

I think we have to be careful about the edge compute hype. It’s not for remote surgery, and it’s probably not for autonomous vehicles, either. When you unlock your iPhone with your face, that computation is done at the edge - meaning the phone itself. This is the ultimate edge compute, and is done for privacy reasons. For navigation, there’s no reason why a CDN couldn’t provide traffic information to vehicles that then compute the best route locally. Mostly today that’s done on either central or edge servers, but privacy concerns may force companies to switch.

So, the question in my mind becomes: when is the best place to perform compute neither central nor in the device itself? That would seem to be situations where the device is underpowered and either the central server is over-taxed or the latency is too long. As an equation, with parentheses for grouping:


 **Underpowered End AND (Overtaxed Central OR High Central Latency) = Non-Endpoint Edge Compute**

So don’t weigh just latency or off-loading of central servers on their own. And don’t forget to factor in network reliability and privacy concerns.
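That formula is just a boolean condition; as a toy sketch (the predicate names are mine, mirroring the bolded terms above):

```python
def non_endpoint_edge_compute(underpowered_end: bool,
                              overtaxed_central: bool,
                              high_central_latency: bool) -> bool:
    """Underpowered End AND (Overtaxed Central OR High Central Latency)."""
    return underpowered_end and (overtaxed_central or high_central_latency)

# A capable endpoint never needs an intermediate edge server under this model:
print(non_endpoint_edge_compute(False, True, True))    # False
# An underpowered IoT sensor with a congested central cloud does:
print(non_endpoint_edge_compute(True, True, False))    # True
```

The parentheses matter: in most languages AND binds tighter than OR, so dropping them would turn the condition into (Underpowered AND Overtaxed) OR High Latency, a different claim.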

8 Likes

Yes, Poffrings lists many more edge computing use cases, including IoT, and Fsly has competition in edge computing. They seem to have advantages, sure. My main point, though, was that everything was well known before March. What changed after the earnings call was revenue growth, which was directly linked to the higher internet traffic, a short-term factor. We do not yet know how Compute@Edge will add to revenue growth. We will know only in 2021.

1 Like

Denny,
I realize this thread is on Fsly, but the IoT discussion reminds me: recently CrowdStrike’s CEO, on an investor call, called IoT a good growth opportunity for them. CrowdStrike’s agent is lightweight, occupying just 35MB, and runs on Linux and Microsoft operating systems.

4 Likes

Yeah, I was in a thread on the paid Fastly board recently where someone was claiming that remote surgery was an edge-compute application. Which, of course, it isn’t - it’s an end to end command application.

The following took me quite a while to figure out, mostly because various companies throw the term “Edge” around, and in technology “Edge” means different things depending on who is using it.

There is the cloud compute model, where compute takes place in the cloud. An example is Amazon’s Alexa: the information the Echo “hears” is not processed locally on the Echo but sent over the internet to AWS, processed in the cloud by the Alexa service, and a response is then sent back to the Echo from the AWS cloud.

When Fastly uses the term Edge, what Fastly considers the Edge is mostly having the compute in their Points of Presence (POPs), which is “nearer” to where the information is either produced or used and Fastly calls this the “Edge cloud” https://www.fastly.com/edge-cloud-platform#:~:text=Fastly….

The most popular use of the word “Edge” refers to processing information either at the device/sensor or very, very close to where the information is produced or used. Providers like these micro data centers https://www.networkworld.com/article/3445382/10-hot-micro-da… , for instance, might consider the Edge to be compute that takes place 20 miles or less from where the data is generated or used.

From the way I have seen remote surgery described, it will use neither the cloud compute model nor the Fastly “Cloud Edge” compute model. Instead, the video feed from the cameras in the operating room will likely be processed right next to the operating room, or somewhere in the hospital, and then sent over the internet to screens where the surgeon is located. In similar fashion, the surgeon’s commands will likely be processed in a location very near the surgeon (possibly within the room or building the surgeon is in) rather than in the cloud or on a Fastly POP.

So, remote surgery is both an edge-compute and an end-to-end command application.

What really confused me about the whole “edge” discussion in the past is that the edge means different things to different people. When Fastly talks about moving the “edge” closer, they mean taking compute out of the cloud or origin server and placing it in a Fastly Point of Presence or a Fastly Managed Point of Presence. Fastly calls what they do the “Edge cloud platform” which should not be confused with what most people mean when they say “Edge Computing”.

When companies like American Tower or Equinix, or edge computing data centers like EdgeMicro https://www.lightreading.com/cloudflare-to-use-edgemicros-ed… , talk about Edge computing, they mean putting compute less than 10 or 20 miles from where the data is generated or used. In contrast, the Fastly Points of Presence might be hundreds of miles from where the end-use application is.

For quite a while I confused and mish-mashed what Fastly means by the “Edge” and what a company like Equinix means by the “Edge”. Anyone who, like me, confused the two meanings has likely been bamboozled by a lot of the discussion around the “Edge”.

Starrob

27 Likes

Here is an example of bringing AR to the edge with FSLY and NEXCF. NexTech has a two-pronged strategy for rapid growth: acquiring eCommerce businesses and growing its omni-channel AR SaaS platform, ARitize™.
https://www.globenewswire.com/news-release/2020/06/25/205330…

They have some AR demos on their website for anyone interested. https://www.nextechar.com/
Looks like they may do very well in the new environment.

There is the cloud compute model, where compute takes place in the cloud. An example is Amazon’s Alexa: the information the Echo “hears” is not processed locally on the Echo but sent over the internet to AWS, processed in the cloud by the Alexa service, and a response is then sent back to the Echo from the AWS cloud.

The reality is, as one should expect, more complicated. Today’s world is rarely all cloud compute, and Alexa is actually a good example of that.

First up is Wake Word detection. This is done locally at the edge - that is, on the Echo device itself. If you’ve ever wondered why Amazon doesn’t let you choose your own Wake Word, it’s because of the limitations on the device itself. They’re keeping costs down.

Second is far-field beam-forming technology. Most Echos have more than one microphone, and there’s processing on the device itself (the edge) to determine which microphone is receiving the strongest signal (typically, lights on the device show you which microphone it’s chosen), and then there’s additional noise-canceling processing to make the signal clearer.

Finally, the recorded sound is sent to the cloud. Even here, Amazon is running a distributed cloud so voice recordings from NY are not sent to San Francisco for processing, for instance. Yet, all your account information has to be available to you, which is probably sourced from some central database.

There’s also the issue that Alexa doesn’t work if you don’t have an internet connection. If you’re using Alexa to control your lighting or thermostat, for instance, what happens if your ISP is having issues? Do you sit in a cold, dark house? Apple balances what’s done on the device versus in the cloud not just for reliability, but also performance and privacy. Who knows if Amazon will tweak their distributed processing model? These kinds of things happen all the time, under the covers.
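As a toy illustration of that device/cloud split (the function names and logic here are invented for illustration and do not correspond to any real Amazon API; they just mirror the three steps above):

```python
# Hypothetical sketch of an Echo-style edge/cloud split. The byte-prefix
# check and "loudest = longest" rule are crude stand-ins for real DSP.

def detect_wake_word(audio_frame: bytes) -> bool:
    """Step 1 (on device): cheap match against the one fixed wake word."""
    return audio_frame.startswith(b"alexa")

def pick_strongest_mic(frames: dict) -> bytes:
    """Step 2 (on device): beam forming - keep the strongest microphone."""
    return max(frames.values(), key=len)

def handle_utterance(frames: dict) -> str:
    best = pick_strongest_mic(frames)
    if not detect_wake_word(best):
        return "ignored"                 # audio never leaves the device
    return "sent to regional cloud"      # step 3: only now hit the network

print(handle_utterance({"mic1": b"alexa lights on", "mic2": b"alex"}))
```

The point of the sketch is where the network boundary sits: the expensive round trip happens only after two cheap, fixed-function checks run locally, which is exactly why the wake word can't be arbitrary on low-cost hardware.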

Fastly calls what they do the “Edge cloud platform” which should not be confused with what most people mean when they say “Edge Computing”.

I don’t know where you’re getting “most people” from, but Fastly’s use is perfectly justified. Fastly has a good number of world-wide distributed servers on which their upcoming Compute@Edge service will run. These are most definitely considered Edge servers by almost anyone’s definition. What makes Fastly unique is that they don’t focus exclusively on having lots and lots of POPs to save a few milliseconds of network latency because they see greater value in saving hundreds of milliseconds with their custom-built SDN internet backbone and large, capable Edge servers.

It is wrong to think that Fastly’s Edge services are misnamed just because Fastly doesn’t believe the Edge equals tons and tons of tiny servers; they believe instead in having enough large-enough servers close enough to users/devices to reduce overall latency even more.

9 Likes

I don’t know where you’re getting “most people” from, but Fastly’s use is perfectly justified. Fastly has 70-ish (growing all the time) POPs on which their upcoming Compute@Edge service will run.

When I refer to “Most People”, it is a reference to most news articles on the subject like this:

What edge computing essentially is is the ability for smart devices to perform these functions locally, either on the device itself or on a close-by edge server. “Edge” refers to the edge of the network, as close as possible to the physical device that’s being used and running fewer processes in the cloud. By conducting operations on the edge, systems and networks can perform more reliably, swiftly and efficiently without compromising functionality.

Read More: https://www.thestreet.com/investing/edge-computing-how-to-in…

Most news articles on “Edge Computing” are not referring to compute on a POP server possibly hundreds of miles away. Reading them might lead to confusion for people who don’t understand that an “Edge” use case Fastly might be good for can be very different from an Edge use case that calls for a micro data center (MDC) used in an “Edge computing” context: https://www.stackpath.com/edge-academy/micro-data-centers/

Also, I made a mistake with my Remote Surgery example. Edge Computing will likely be used with remote surgery, but not in the way I described it in my first post. In the past, an issue was brought up about the speed of light not allowing response times of ten milliseconds if the operating room and surgeon are something like 1,000 miles apart. Well, here is how researchers are looking to get around that problem:

“Edge computing means that you can place computing power and processing power really far out in the network. You can place a lot of critical software that relies on machine learning or computer assisted vision, which is an advantage you didn’t really have before.”

In remote surgery, robotic applications need to be able to analyse data on their own to provide assistance and make sure procedures are completed safely, quickly and accurately. As 5G surgeries move towards greater distances, different edge computing requirements will be needed to sustain the speed and reliability of the connection.

Read More: https://www.medicaldevice-network.com/features/5g-remote-sur…

In other words, remote surgery will have compute running sophisticated AI algorithms with less than a 10 millisecond response time. If lag, or even a surgeon’s mistake, threatens to send an operating tool into a major artery, a machine learning algorithm running on a nearby computer can analyze the data, see that coming, and stop the tool faster than a surgeon located over a thousand miles away could react. So, remote surgery will likely be a mixture of Edge Computing and command and control, if that use case should ever become widely available.

Starrob

From the way I have seen remote surgery described, it will use neither the cloud compute model nor the Fastly “Cloud Edge” compute model. Instead, the video feed from the cameras in the operating room will likely be processed right next to the operating room, or somewhere in the hospital, and then sent over the internet to screens where the surgeon is located.

What is this processing that you think is happening next door?

The point being made here is that the image information needs to get across the internet to the surgeon who then needs to react to that image and act on the reaction and the action then needs to go back across the internet to the surgical device.

2 Likes

What is this processing that you think is happening next door?

The point being made here is that the image information needs to get across the internet to the surgeon who then needs to react to that image and act on the reaction and the action then needs to go back across the internet to the surgical device.

Read my latest response, just before yours. I just said that description was in error. I will repeat it:

Also, I made a mistake with my Remote Surgery example. Edge Computing will likely be used with remote surgery, but not in the way I described it in my first post. In the past, an issue was brought up about the speed of light not allowing response times of ten milliseconds if the operating room and surgeon are something like 1,000 miles apart. Well, here is how researchers are looking to get around that problem:

“Edge computing means that you can place computing power and processing power really far out in the network. You can place a lot of critical software that relies on machine learning or computer assisted vision, which is an advantage you didn’t really have before.”

In remote surgery, robotic applications need to be able to analyse data on their own to provide assistance and make sure procedures are completed safely, quickly and accurately. As 5G surgeries move towards greater distances, different edge computing requirements will be needed to sustain the speed and reliability of the connection.

Read More: https://www.medicaldevice-network.com/features/5g-remote-sur…

In other words, remote surgery will have compute running sophisticated AI algorithms with less than a 10 millisecond response time. If lag, or even a surgeon’s mistake, threatens to send an operating tool into a major artery, a machine learning algorithm running on a nearby computer can analyze the data, see that coming, and stop the tool faster than a surgeon located over a thousand miles away could react. So, remote surgery will likely be a mixture of Edge Computing and command and control, if that use case should ever become widely available.

Starrob

1 Like

I know they have performed surgery remotely using DaVinci but I don’t believe that is the norm. I believe the surgeon and patient are usually in the same facility and don’t rely on the internet or the edge to perform procedures. Do we have anyone with direct robotic surgery experience that can address this?

1 Like

Read my latest response in the response just before yours. I just said that description was in error. I will repeat it again:

Surgery is not like a conversation where one speaks and then the other speaks. It’s simultaneous two-way communication: the patient sends a continuous stream of data - his image and whatever the instruments are picking up - and the surgeon sends a continuous stream of commands to the robot tools.

I wonder how much delay there can be before the surgeon loses his effectiveness.

When one is steering a boat there is quite a delay in the response which makes novices oversteer but you soon get the hang of it. I imagine surgeons would also get the hang of it. I don’t think it’s quite as critical as some of you paint it. I could be wrong.

The Captain

2 Likes

I wonder how much delay there can be before the surgeon loses his effectiveness.

The Captain

I have seen different answers to that question on the internet. I once saw an article, in PDF form, by a remote-surgery researcher that said 150 milliseconds was optimally the maximum for making remote surgery widespread, for the vision aspects of the surgery. Using haptics requires even lower response times; I forget how much was said.

To give people an idea of how much lag there can be in a system, check out this article:

(2015) Roger Smith, CTO of Florida Hospital Nicholson Center in Celebration, FL, tested lag times created by the internet between his facility and one in Ft. Worth, TX for potential remote surgery. The lag times were negligible and ranged from 30 to 150 milliseconds.

https://www.rawscience.tv/hospital-tests-lag-time-for-remote…

Celebration, FL and Ft. Worth, TX are about 175 miles apart, and that lag sounds reasonable for the internet. Extrapolating that to 1,000 miles, the lag would be between 171 and 857 milliseconds.

Now, this article should be given a full read:

The Mimic Simulator was able to first artificially dial up lag times, starting with 200 milliseconds (100 milliseconds is one-tenth of a second) all the way up to 600 milliseconds.

At 200 milliseconds, surgeons could not detect a lag time. From 300 to 500 milliseconds, some surgeons could detect lag time, but they were able to compensate for it by pausing their movement. But at 600 milliseconds, most surgeons became insecure about their ability to perform a procedure, Smith said.

Read More: https://www.computerworld.com/article/2927471/robot-performs….

I am not going to get into that whole article, but it is an interesting read. So my interpretation is that if researchers want to make Remote Surgery available to ALL surgeons instead of only some surgeons, then 200 milliseconds of lag should be the cutoff for safety reasons.
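The thresholds from the quoted study reduce to a small lookup (this just paraphrases the numbers above, nothing more):

```python
def surgeon_lag_tolerance(lag_ms: int) -> str:
    """Map simulated lag to the surgeon reactions reported in the study."""
    if lag_ms <= 200:
        return "undetectable"             # surgeons could not detect a lag
    if lag_ms <= 500:
        return "detectable, compensable"  # some noticed, paused to compensate
    return "unsafe"                       # most felt insecure at 600 ms

print(surgeon_lag_tolerance(150))   # undetectable
print(surgeon_lag_tolerance(400))   # detectable, compensable
print(surgeon_lag_tolerance(600))   # unsafe
```

Note how tight that budget is against the measured 30-150ms internet lag quoted earlier: a bad routing day eats most of the "undetectable" band before the robot or the human is even counted.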

From what I gather, most remote surgery is possible today within the USA from several hundred miles away, strictly using command and control systems, if a very reliable connection is available. But if we are talking about attempting a remote surgery from, say, New York City to Hamburg, Germany (3,807 mi), then doing it reliably would also be an Edge computing use case, to compensate for the lag. That is just my interpretation from reading a few articles on the subject.

Starrob

The vast majority of all operations are still done locally. The likelihood of having a surgical robot but not having a surgeon who can use it is very small. More likely would be to have a surgeon operating the machine locally but have the video streamed to another specialist if necessary.

It also depends on how delicate the procedure is. A hernia repair could probably be done remotely with no issues, since there isn’t as much detail needed. A nerve sparing prostatectomy would require a lot more precision. You also need to add human response time to the system. It’s still possible, given that those structures are generally fixed. So the surgeon just has to go slow. But they can operate based on what’s on their screen, not necessarily what’s real time.

For robotic heart surgery that’s a problem. The heart is beating and you need to react quickly. Sometimes it’s a matter of timing your needle throw based on the beating heart, but if the heart takes an extra beat you have milliseconds to react. AI could probably help at some point, but I suspect that’s a long way off.

I do a lot of catheter and needle based procedures. Much of it could also be done robotically since I go by information on an X-ray machine or ultrasound which have inherent delays anyway to process the images.

It might make sense for some kind of more direct relay between the surgeon and the robot, which FSLY might be better equipped to handle, with any image compression or post processing handled by their edge computing. I think it’s far enough away in the future that it’s unlikely to be significant in the investment thesis today.

6 Likes

Celebration Fl to Fort Worth TX is over 1,100 miles, not 175 miles.

1 Like

The sources of the articles one quotes matters. Each of the two articles quoted have problems:

Problem 1: Expecting an investing site like The Street to get Edge Computing right. And even then, it doesn’t help to quote the platitudes they write:
By conducting operations on the edge, systems and networks can perform more reliably, swiftly and efficiently without compromising functionality.

That quote is unsubstantiated nonsense. There’s nothing more reliable or efficient about computing at the Edge. Even “swiftly” is debatable, as more powerful central cloud servers can save more time than the additional network latency adds, for some operations.

At least quote basic truths like:
Latency issues constrain functionality

Even if the article doesn’t discuss the particular functionalities that are constrained. Which is one of my original points - not everything, heck, not most things, need ultra-low network latency.

Problem 2: Expecting a legacy vendor like StackPath to get Edge Computing right. We’ve gone over this ground here before; Fastly even warns readers that their competition will try to present their outdated architecture as an advantage: “legacy vendors have resorted to positioning this limitation of their networks as a strength.” (https://www.fastly.com/blog/why-having-more-pops-isnt-always… )

The salient aspects to consider are pretty simple, even if building the right solution isn’t:
A) Local compute (Edge Endpoints) has zero network latency and ultimate privacy, but cost, size, and power requirements have to be considered.

B) Central cloud compute is the most scalable and, with existing cloud technologies, arguably the most straightforward to implement. It is, however, the most dependent on network reliability and has the most network latency, each of which can be somewhat addressed through standard distributed cloud computing techniques.

C) Edge Servers (to distinguish them from Edge Endpoints) can reduce network latency even further, but bring in additional complexity, as the compute is now very distributed yet often needs to be coordinated.

There are other considerations as well, but not worth getting into here.
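The A/B/C trade-offs could be caricatured as a tiny decision helper (purely illustrative; the inputs and labels paraphrase the points above, not any vendor's framework):

```python
def pick_compute_location(device_capable: bool,
                          latency_critical: bool,
                          privacy_critical: bool) -> str:
    """Caricature of the A/B/C trade-offs discussed above."""
    if device_capable and (privacy_critical or latency_critical):
        return "edge endpoint"   # A: zero network latency, ultimate privacy
    if latency_critical:
        return "edge server"     # C: shave latency, accept coordination cost
    return "central cloud"       # B: most scalable, most network latency

print(pick_compute_location(True, False, True))    # edge endpoint (Face ID-style)
print(pick_compute_location(False, True, False))   # edge server
print(pick_compute_location(False, False, False))  # central cloud
```

A real architecture weighs cost, power, and network reliability too, as noted above; the sketch only captures the first-order branching.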

In short, the main advantage of Edge Computing is lower latency and partial independence from network issues. What matters in any performance tuning, however, is overall solution performance. In most cases, saving a few milliseconds of network latency with a multitude of Edge Servers may be less beneficial than focusing on other aspects of the solution that may yield 10X or 100X the time savings.

Going back to the Alexa example, today it already employs both Edge Endpoint and Central Cloud computing. It’s not hard to imagine a network of Edge Servers that might be able to respond more quickly to requests from Echo devices if the actions to be taken are local in nature. For instance, turning your lights on. However, if you’re re-ordering something you’ve previously bought via Alexa, then the central server might be best (unless you have a Fastly-style CDN network so that your local POP knows what you’ve previously ordered without having to pull from Amazon central).

If anything, this brings to light the problems with what I call Buzzword Investing. At the turn of the century, 3G was one such buzzword. It was “Mobile Broadband.” In 2006, people were wondering if it was worth the cost (https://www.nytimes.com/2006/07/30/technology/30iht-3G.html ). Of course, that was the year before the iPhone changed everything. It turned out the better investment was not in 3G infrastructure, but in the handheld device that made use of faster data (and even with the iPhone it took a few generations before it supported 3G).

So, when people say “I want to invest in 5G” or “I want to invest in Edge Computing,” I am less sanguine. I’m sure there’s money to be made somewhere in companies related to those technologies, but not in the technology itself, and I personally won’t invest until I see the “game changing” applications come out.

13 Likes