Fastly presents to Merrill Lynch

This is a MUST read for anyone already invested or thinking of investing in Fastly:
https://investors.fastly.com/files/doc_downloads/transcript/…

It literally is Fastly management presenting to analysts what Fastly’s business is going to do. The story presented is so good, it’s probably why FSLY has shot up in the 3 weeks since. You should read every word, but I’ll pick and choose some of the highlights:

CEO Joshua Bixby started out by talking about a “digital transformation” that is occurring. This is organizations “looking to build differentiation.”

fundamentally, we believe that all of the data that is now transiting the Internet will benefit from a performance, scale and security front by using an edge cloud. We believe, and we see this with our innovative customers, that the most innovative customers have a new architecture. And that architecture involves their code at our edges.

Bixby was asked about legacy CDN providers having flat growth. Bixby responded, “I think we are growing faster than 30%,” with growth coming from 2 main areas:
• TAM is growing. More content going online.
• Fastly sells to a different buyer.

For the TAM: there’s a new architecture for the Internet. It’s Twilio, and it’s Fastly, and it’s Datadog, and it’s Slack for communication. I mean we are part of a user – a serverless and user-based – usage-based model that we’re seeing come out of this very successfully.

For the different buyer: the traditional content delivery network is an IT product. It sells to an IT buyer [that] is becoming subservient to the developer because organizations aren’t buying differentiation. They’re building it, and they’re putting that in the hands of the developer.

He then addresses the VOD market (one that Limelight concentrates on, for instance):
One of the markets that sort of overshadows, I think, historically or has overshadowed historically is the video-on-demand market. And this is a highly competitive market. This is a market that’s comprised of Akamai and Limelight and Level 3 and Highwinds and Verizon EdgeCast, and this is a market which is characterized by incredible price compression, very low margins. And price is the dominant sort of artifact. And that’s because much of the content that goes over these networks is very difficult to monetize and increasingly difficult to monetize.

Fastly’s not “really” in that side of the market, and so isn’t exposed to that price compression. Bixby points to posts from Dan Rayburn (see my LLNW post for a link for instance) on that competition. Fastly’s in a different side of the video business, a “very high-value” side, which is live streaming and VOD that actually feels like live.

Bixby describes 4 moats:

  1. Fastly is a programmable edge. Enables the developer to bring their code to Fastly’s edges. Example: NY Times. Not just the article content, but the paywall and personalization. Performance and scale and security.

  2. Unique architecture. Fastly has its own SDN (Software Defined Network). It’s a single network, not a hodge-podge of legacy networks strung together. Bixby gives an example of someone who might have had a secure network for HTTPS and one for HTTP, and they couldn’t send traffic from one network to the other because of security concerns. It’s also far more efficient: Fastly has 2,500-ish servers in our network. Our largest competitor has 270,000 and we are faster and more performant than they are.

  3. Security in Depth. Again, not a hodge-podge of different technologies from multiple acquisitions needing integration, Fastly has built a toolbox for security developers and security practitioners to allow them to see what is happening in real time in the network and react to it in real time. So our reaction times are measured in seconds and milliseconds.

  4. Customer empowerment. We are a usage-based system. We are like the cloud. So tomorrow night, you have inspiration at midnight, you sign up, you’re online, and our product is available in 30 seconds. All of the documentation is online. You have full access to the product. You can start using it. All of our pricing is available publicly. You don’t have to get a massive professional service organization. Bixby points out that their “largest competitor” was generating 8% to 10% of their revenue from professional services (before they stopped breaking that out). As he put it: the anti-cloud methodologies of the competitors, which is you fill in a form. You call a salesperson. You have to go for a steak dinner. You get a team of professional services people… And professional services is required in order to make any major changes to the product offering

On the Cloud Titans: Fastly’s partnerships with the Cloud Titans “speaks to the sophistication of what we’ve built.” And there have been some recent posts, which we can’t speak publicly about, around the fact that we’re really on the crown jewels of some of these organizations.

Fastly enables customers to easily switch between cloud providers, which is important to them as it helps them negotiate better pricing when the cloud providers know they can easily switch. Fastly also enables customers to have a single security perimeter no matter which cloud is being used.

Then Bixby dives into CDN complexity that is a bit obtuse, but worth knowing as it shows Fastly’s differentiation. He gives the example of the Super Bowl or Champions League soccer. He admits that with so many eyeballs, no one edge network has the capacity today to deliver it and give you redundancy, so what you’re going to do is find 3 or 4 edges. Fastly would be one of those edges, providing ⅓ to ¼ of the traffic.

But, then there’s the middle tier - topologically speaking, not geographically speaking. The mid-tier provides the content to the edges when the edges don’t have it cached. Fastly has a separate mid-tier product to provide optimization and shielding: all of these edges actually go back to another Fastly product. And that Fastly product, [for] the Super Bowl, would be Media Shield. And the reason we do that and the reason organizations do that is because one of the things they really want their core central cloud to do is to be doing the encoding…

…we can offload almost completely that central cloud from doing any of the serving work. … very strategically important to be in that middle tier on these types of events. That’s a very high-value position.

This is something that’s often overlooked with all the hype about “the edge.” Bixby is saying that CDNs aren’t just about the edge, but about providing complete content delivery services. Customers provide their content to Fastly once, and Fastly takes care of the rest. Even if multiple edge providers are utilized, the content still goes through a Fastly mid-tier product.
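To make the topology concrete, here is a toy sketch of that two-tier lookup (hypothetical names and classes, not Fastly’s actual Media Shield implementation): several edge caches miss to one shared shield, and only the shield ever goes back to the origin, leaving the origin free to do the encoding.

```python
class Origin:
    """Stand-in for the customer's central cloud, which should be spending its
    cycles on encoding rather than serving the same segment to every edge."""
    def __init__(self):
        self.requests_served = 0

    def fetch(self, key):
        self.requests_served += 1
        return f"segment-bytes-for:{key}"


class CacheTier:
    """One cache layer: an edge POP, or the shared mid-tier shield."""
    def __init__(self, name, upstream):
        self.name = name
        self.upstream = upstream          # the shield for edges, the origin for the shield
        self.store = {}

    def get(self, key):
        if key not in self.store:         # miss: go one hop toward the origin
            self.store[key] = self.upstream.fetch(key)
        return self.store[key]

    fetch = get                           # let tiers chain uniformly


origin = Origin()
shield = CacheTier("media-shield", origin)
edges = [CacheTier(f"edge-{i}", shield) for i in range(4)]   # e.g. 3 or 4 edge networks

for edge in edges:
    edge.get("superbowl/segment-001.ts")

print(origin.requests_served)   # 1 -- the shield absorbed the other edges' misses
```

However many edge providers sit in front, the shield tier means the origin answers each request once.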

This is getting long, so I’m going to skip the section on security in order to get to the CFO Adriel Lares’ presentation, which is of interest to more people here:

Usage peaked in late March due to the lockdowns, but April remained elevated compared to pre-lockdown levels. They do expect a more normal situation starting July 1 … onwards and that’s how they built their guidance through Q4.

Lares was asked about Gross Margins. When Fastly went public, their target was 70%+. They are now around 60%. Improvements are expected from scale: buying more bandwidth will result in better unit pricing (40% today, shooting for 70% like their legacy competitors have). So, size matters.

On markets: We’re in about 55 markets around the world. We think we need to be in about 100, excluding China, to cover the world. I think once you get closer to that sort of critical mass, I think we will begin to see additional leverage in …real estate cost.

There will be further investment in security offerings, and they have great expectations for Compute@Edge. These are higher-leverage products for Fastly since they don’t require an increase in bandwidth capacity; they run at the edge.

Fastly expects new, higher-margin business as they expand with Compute@Edge and as more customers want security.

After reading this, I’m even more excited about my investment in FSLY, and so despite the recent run-up, added some more today.

95 Likes

Smorg,
I listened to that call and have also read Poffringa’s and MF’s work. I understand Fastly’s moat to some extent. I am invested in it and glad about its incredible run. What surprises me a lot, though, is that pretty much all of the “moat” etc. existed even pre-Covid, but the company was growing only around 38%. The only thing that changed is that Covid caused internet traffic to surge. This enabled Fastly to increase its 2020 revenue growth outlook from 30% to 50%. Perhaps this caused more people to take notice, which is normally good for the SP. After the internet traffic surge normalizes, say in 2021, why should FSLY be able to maintain its 50%+ growth? We can hope Compute@Edge takes off, but we don’t know that for sure.

8 Likes

Fully agree. The information unveiled in this call goes well beyond the Q1 conference call, and I truly believe making Joshua the CEO of FSLY is a huge win for Fastly and for shareholders.

Zoro
Super long FSLY

After the internet traffic surge normalizes, say in 2021, why should FSLY be able to maintain its 50%+ growth? We can hope Compute@Edge takes off, but we don’t know that for sure.

Texmex, you are thinking of Edge in terms of human traffic, but the real driver will not be humans (7.8 billion by last count) but IoT devices, which I have heard estimated at 75 billion in one case and trillions in another (I don’t recall the time frames). Such an explosion of IoT devices practically guarantees that Edge will happen but it does not guarantee Fastly’s future.

Denny Schlesinger

10 Likes

Such an explosion of IoT devices practically guarantees that Edge will happen but it does not guarantee Fastly’s future.

Yes, and that’s also at least partially because what is “the edge” is fuzzy. For autonomous automobiles, the edge is today typically the car itself. There is talk of moving compute to 5G cell towers and the like, but that seems unlikely for autonomous driving use cases in my opinion. What Fastly could provide at the edge is traffic information and routing. No need for a central server to know about traffic in CA and MA, for instance, and even today when you route with traffic that’s often done on a server, not in the vehicle, so that server could ideally be a local “edge” server - no need to be central for all routing calculations, either.

Moving compute out of data centers to servers positioned more locally does make sense for some applications. Fastly already gives the example of the NY Times using edge compute for login and security. That means as a NY Times subscriber, you don’t have to log in to a NY Times central server, but can do so at one of Fastly’s edge servers. There will be even more examples for Fastly as Compute@Edge comes out of beta.

That said, I still think that as compute continues to get cheaper, that some applications will move from a central server to the device itself and bypass the “edge” as a CDN might define it. I haven’t seen any studies that indicate what the relative percentages of where compute will end up might be. At this point the whole compute world is expanding so greatly that I think all compute locations will see new business.

But, despite my apparent poo-pooing of some of the edge compute cases being touted in the media today, I do think it is real and will have real advantages for some set of real use cases. And, the tricky part of the problem, which Fastly seems to have at least partially solved, is doing compute on local edge servers without having to “phone home” to the main central server. The NY Times login example proves they can do that today - certainly the NY Times’ own central server has the whole up-to-date truth of which users have valid accounts, yet Fastly treats that information CDN-style so that its POPs also have much of that information as well, and can update those POPs really really quickly (a Fastly advantage compared to other CDNs), and thus logins can happen locally most of the time.
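For anyone who wants to picture how a login check like that could be answered at a POP, here is a minimal sketch (hypothetical names and logic, not Fastly’s or the NY Times’ actual implementation): the origin stays the source of truth and pushes updates out, so most checks never leave the edge.

```python
import time

class OriginAuth:
    """Stand-in for the publisher's central subscriber database -- the
    'whole up-to-date truth' about which accounts are valid."""
    def __init__(self, valid_tokens):
        self.valid_tokens = set(valid_tokens)

    def is_valid(self, token):
        return token in self.valid_tokens


class EdgePop:
    """A POP that answers most login checks locally, with the origin
    pushing updates/purges so the cached answers stay fresh."""
    def __init__(self, origin):
        self.origin = origin
        self.sessions = {}                      # token -> expiry timestamp

    def push_update(self, token, ttl_s=300):
        """Origin-initiated refresh; the analogue of a very fast purge/update."""
        self.sessions[token] = time.time() + ttl_s

    def revoke(self, token):
        self.sessions.pop(token, None)

    def check_login(self, token):
        expiry = self.sessions.get(token)
        if expiry and expiry > time.time():
            return True                         # answered at the edge, no origin round trip
        if self.origin.is_valid(token):         # rare fallback to the central server
            self.push_update(token)
            return True
        return False


origin = OriginAuth({"alice-token"})
pop = EdgePop(origin)
print(pop.check_login("alice-token"))   # True: first check falls back, then it's cached locally
print(pop.check_login("bob-token"))     # False: not a subscriber
```

The interesting part is less the lookup than the update path: the faster the origin can refresh or revoke entries at every POP, the more of these checks can safely stay local.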

6 Likes

That said, I still think that as compute continues to get cheaper, that some applications will move from a central server to the device itself and bypass the “edge” as a CDN might define it. I haven’t seen any studies that indicate what the relative percentages of where compute will end up might be. At this point the whole compute world is expanding so greatly that I think all compute locations will see new business.

Having observed computing for 60 years I can say that it moved from core to edge and back several times as technology and use cases developed. Right now IoT technology and use cases are pushing computing to the edge. The one unsurmountable obstacle I see in moving to core once again is the limiting effect of the speed of light.

Recently I read an article about mushrooms or some other kind of living plant that covered acres and acres of land but which was just one distributed organism. I can see worldwide computing evolving into something like that, everything connected to everything else on a need basis.

Science fiction becoming reality.

Denny Schlesinger

6 Likes

Recently I read an article about mushrooms or some other kind of living plant that covered acres and acres of land but which was just one distributed organism. I can see worldwide computing evolving into something like that, everything connected to everything else on a need basis.

Isn’t Tesla well on the way to this kind of structure?

1 Like

The one unsurmountable obstacle I see in moving to core once again is the limiting effect of the speed of light.

Yeah, I was in a thread on the paid Fastly board recently where someone was claiming that remote surgery was an edge-compute application. Which, of course, it isn’t - it’s an end to end command application. Even so, when that person said a 10ms delay was the maximum tolerable and that remote surgery would be done 1000 miles or more away, well I had to point out that was impossible. At the speed of light through the best fiber optic cable, a 1,000-mile round trip would take about 16ms - and that’s without repeaters, routers, copper to fiber conversion at the ends, etc.
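For anyone who wants to check that arithmetic, here is the back-of-the-envelope version (assuming light travels at roughly c/1.47 in glass fiber, and ignoring repeaters, routers, and conversion at the ends):

```python
# Back-of-the-envelope fiber latency: physical limit only, no repeaters,
# routers, or electrical/optical conversion at the ends.
SPEED_OF_LIGHT_VACUUM_KM_S = 299_792          # km/s
FIBER_REFRACTIVE_INDEX = 1.47                 # typical for silica glass fiber
KM_PER_MILE = 1.609

def fiber_round_trip_ms(miles: float) -> float:
    """Round-trip time in milliseconds for a signal over `miles` of fiber."""
    km = miles * KM_PER_MILE
    speed_in_fiber_km_s = SPEED_OF_LIGHT_VACUUM_KM_S / FIBER_REFRACTIVE_INDEX
    one_way_s = km / speed_in_fiber_km_s
    return 2 * one_way_s * 1000

print(round(fiber_round_trip_ms(1000), 1))    # ~15.8 ms -- already past a 10 ms budget
```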

I think we have to be careful about the edge compute hype. It’s not for remote surgery, and it’s probably not for autonomous vehicles, either. When you unlock your iPhone with your face, that computation is done at the edge - meaning the phone itself. This is the ultimate edge compute, and is done for privacy reasons. For navigation, there’s no reason why a CDN couldn’t provide traffic information to vehicles that then compute the best route locally. Mostly today that’s done on either central or edge servers, but privacy concerns may force companies to switch.

So, the question in my mind becomes: when is the best place to perform compute neither central nor in the device itself? That would seem to be situations where the device is underpowered and either the central server is over-taxed or the latency to it is too long. As an equation (with parentheses):

 **Underpowered End AND (Overtaxed Central OR High Central Latency) = Non-Endpoint Edge Compute**

So, do not consider just latency or the off-loading of central servers on their own. And don’t forget to factor in network reliability and privacy concerns.
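Just to make that concrete, here is a toy sketch of that decision rule in code (the names and thresholds are made up for illustration), including the reliability and privacy factors:

```python
from dataclasses import dataclass

@dataclass
class Workload:
    # All fields are illustrative; real evaluations are far messier.
    device_can_handle_it: bool      # is the endpoint powerful enough?
    central_overtaxed: bool         # is the central server out of headroom?
    central_latency_ms: float       # round-trip latency to the central server
    latency_budget_ms: float        # how much delay the use case tolerates
    needs_offline: bool = False     # must it work without a network?
    privacy_sensitive: bool = False # should raw data stay near the user?

def best_compute_location(w: Workload) -> str:
    """Rough version of: Underpowered End AND (Overtaxed Central OR
    High Central Latency) = Non-Endpoint Edge Compute."""
    if w.needs_offline or (w.device_can_handle_it and w.privacy_sensitive):
        return "on-device"
    high_central_latency = w.central_latency_ms > w.latency_budget_ms
    if not w.device_can_handle_it and (w.central_overtaxed or high_central_latency):
        return "edge server"
    return "on-device" if w.device_can_handle_it else "central cloud"

# e.g. an underpowered IoT sensor with a tight latency budget:
print(best_compute_location(Workload(False, False, 120, 50)))   # -> "edge server"
```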

8 Likes

Yes, Poffringa lists many more edge computing use cases, including IoT, and FSLY has competition in edge computing. They seem to have advantages, sure. My main point, though, was that everything was well known before March. What changed after the EC was the revenue growth outlook, which was directly linked to the higher internet traffic, a short-term factor. We do not know yet how Compute@Edge will add to revenue growth. We will know only in 2021.

1 Like

Denny,
I realize this thread is on FSLY, but the IoT discussion reminds me: recently the Crowdstrike CEO, on an investor call, called that a good growth opportunity for them. Crowdstrike’s agent is lightweight, occupying just 35MB, and works on Linux and Microsoft systems.

4 Likes

Yeah, I was in a thread on the paid Fastly board recently where someone was claiming that remote surgery was an edge-compute application. Which, of course, it isn’t - it’s an end to end command application.

The following took quite a while for me to figure out, mostly because various companies have thrown around the term “Edge,” but the word “Edge” in technology means different things depending on who is using the term.

There is the cloud compute model, where compute takes place in the cloud. An example of compute taking place in the cloud is Amazon’s Alexa: the information the Echo “hears” is not processed locally on the Amazon Echo but sent over the internet to AWS, processed in the cloud by the Alexa service, and then a response is sent back to the Echo from Alexa in the AWS cloud.

When Fastly uses the term Edge, what Fastly considers the Edge is mostly having the compute in their Points of Presence (POPs), which are “nearer” to where the information is either produced or used; Fastly calls this the “Edge cloud”: https://www.fastly.com/edge-cloud-platform#:~:text=Fastly….

The most popular use of the word “Edge” refers to processing information either at the device/sensor itself or very, very close to where the information is produced or used. Companies like these micro data center operators https://www.networkworld.com/article/3445382/10-hot-micro-da… , for instance, might consider the Edge to be compute that takes place 20 miles or less from where the data is generated or used.

With the way I have seen remote surgery described, remote surgery will not use the cloud compute model nor the Fastly “Cloud Edge” compute model. Instead, the video feed from the cameras in the operating room will likely be processed right next to the operating room, or somewhere in the hospital, and then sent over the internet to screens where the surgeon is located. In a similar fashion, the commands from the surgeon will NOT use the cloud compute model nor the Fastly “Cloud Edge” compute model, but will likely be processed in a location very near the surgeon (possibly within the room the surgeon is in or within the building the surgeon is in).

So, remote surgery is both an edge-compute and an end to end command application.

What really confused me about the whole “edge” discussion in the past is that the edge means different things to different people. When Fastly talks about moving the “edge” closer, they are talking about taking the compute out of the cloud or origin server and placing the compute in a Fastly Point of Presence or in a Fastly Managed Point of Presence. Fastly calls what they do the “Edge cloud platform” which should not be confused with what most people mean when they say “Edge Computing”.

When companies like American Tower or Equinix, or edge computing data centers like EdgeMicro https://www.lightreading.com/cloudflare-to-use-edgemicros-ed… , talk about Edge computing, they are talking about putting the compute less than 10 or 20 miles from where the data is generated or used. In contrast, the Fastly Points of Presence might be hundreds of miles from where the end-use application is.

For quite a while I confused and mish-mashed what Fastly means by the “Edge” with what a company like Equinix means by the “Edge”. Anyone who, like me, confused the two meanings has likely also been bamboozled by a lot of the discussion around the “Edge”.

Starrob

27 Likes

Here is an example of bringing AR to the edge with FSLY and NEXCF. NexTech has a two-pronged strategy for rapid growth including growth through acquisition of eCommerce businesses and growth of its omni-channel AR SaaS platform called ARitize™.
https://www.globenewswire.com/news-release/2020/06/25/205330…

They have some AR demos on their website for anyone interested. https://www.nextechar.com/
Looks like they may do very well in the new environment.

There is the cloud compute model, where compute takes place in the cloud. An example of compute taking place in the cloud is Amazon’s Alexa: the information the Echo “hears” is not processed locally on the Amazon Echo but sent over the internet to AWS, processed in the cloud by the Alexa service, and then a response is sent back to the Echo from Alexa in the AWS cloud.

The reality is, as one should expect, more complicated. Today’s world is rarely all cloud compute, and Alexa is actually a good example of that.

First up is Wake Word detection. This is done locally at the edge - that is, on the Echo device itself. If you’ve ever wondered why Amazon doesn’t let you choose your own Wake Word, it’s because of the limitations on the device itself. They’re keeping costs down.

Second is far-field technology beam forming. Most Echos have more than 1 microphone, and there’s processing on the device itself (the edge) to determine which microphone is receiving the strongest signal (typically lights on the device show you which microphone it’s chosen), and then there’s additional noise-canceling processing to make the signal clearer.

Finally, the recorded sound is sent to the cloud. Even here, Amazon is running a distributed cloud so voice recordings from NY are not sent to San Francisco for processing, for instance. Yet, all your account information has to be available to you, which is probably sourced from some central database.
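Putting those three stages together, a rough sketch of the on-device/cloud split looks something like this (all function bodies are placeholders, not Amazon’s actual code):

```python
# A rough sketch of the staged split described above: what runs on the
# device versus in the cloud. Every body below is a stand-in.

def detect_wake_word(audio_frame: bytes) -> bool:
    """Stage 1, on the device: a small fixed model listens for the wake word."""
    return b"alexa" in audio_frame             # placeholder for a tiny local model

def beamform(mic_channels: list[bytes]) -> bytes:
    """Stage 2, on the device: pick the strongest microphone and clean up the signal."""
    return max(mic_channels, key=len)          # placeholder for the real DSP

def send_to_cloud(clean_audio: bytes) -> str:
    """Stage 3, in the (distributed) cloud: heavy speech recognition and intent handling."""
    return "turn on the lights"                # placeholder for the cloud's response

def handle_utterance(mic_channels: list[bytes]):
    if not detect_wake_word(mic_channels[0]):
        return None                            # nothing ever leaves the device
    clean = beamform(mic_channels)
    return send_to_cloud(clean)                # audio crosses the network only now

mics = [b"...alexa turn on the lights...", b"...faint echo..."]
print(handle_utterance(mics))
```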

There’s also the issue that Alexa doesn’t work if you don’t have an internet connection. If you’re using Alexa to control your lighting or thermostat, for instance, what happens if your ISP is having issues? Do you sit in a cold, dark house? Apple balances what’s done on the device versus in the cloud not just for reliability, but also performance and privacy. Who knows if Amazon will tweak their distributed processing model? These kinds of things happen all the time, under the covers.

Fastly calls what they do the “Edge cloud platform” which should not be confused with what most people mean when they say “Edge Computing”.

I don’t know where you’re getting “most people” from, but Fastly’s use is perfectly justified. Fastly has a good number of worldwide distributed servers on which their upcoming Compute@Edge service will run. These are most definitely considered Edge servers by almost anyone’s definition. What makes Fastly unique is that they don’t focus exclusively on having lots and lots of POPs to save a few milliseconds of network latency; they see greater value in saving hundreds of milliseconds with their custom-built SDN internet backbone and large, capable Edge servers.

It is wrong to think that Fastly’s Edge services are misnamed just because they don’t believe the Edge means tons and tons of tiny servers; their view is that having enough large-enough servers close enough to users/devices reduces overall latency even more.

9 Likes

I don’t know where you’re getting “most people” from, but Fastly’s use is perfectly justified. Fastly has 70-ish (growing all the time) POPs on which their upcoming Compute@Edge service will run.

When I refer to “Most People”, it is a reference to most news articles on the subject like this:

What edge computing essentially is, is the ability for smart devices to perform these functions locally, either on the device itself or on a close-by edge server. “Edge” refers to the edge of the network, as close as possible to the physical device that’s being used, and running fewer processes in the cloud. By conducting operations on the edge, systems and networks can perform more reliably, swiftly and efficiently without compromising functionality.

Read More: https://www.thestreet.com/investing/edge-computing-how-to-in…

Most news articles on “Edge Computing” are not referring to having the compute on a POP server possibly hundreds of miles away. Reading most news articles on the subject of Edge Computing might lead to confusion for people who don’t understand that an “Edge” use case Fastly might be good for can be very different from an Edge use case that calls for a micro data center (MDC) used in an “Edge computing” context: https://www.stackpath.com/edge-academy/micro-data-centers/

Also, I made a mistake with my Remote Surgery example. Edge Computing will likely be used with remote surgery, but not in the way I described it in my first post. In the past, an issue was brought up about the speed of light not allowing response times of ten milliseconds if the operating room and surgeon are something like 1,000 miles apart. Well, here is how researchers are looking to get around that problem:

“Edge computing means that you can place computing power and processing power really far out in the network. You can place a lot of critical software that relies on machine learning or computer assisted vision, which is an advantage you didn’t really have before.”

In remote surgery, robotic applications need to be able to analyse data on their own to provide assistance and make sure procedures are completed safely, quickly and accurately. As 5G surgeries move towards greater distances, different edge computing requirements will be needed to sustain the speed and reliability of the connection.

Read More: https://www.medicaldevice-network.com/features/5g-remote-sur…

In other words, remote surgery will have compute running sophisticated AI algorithms with less than a 10 millisecond response time. If lag, or even a surgeon’s mistake, could potentially cause an operating tool to accidentally cut into a major artery, a machine learning algorithm running on a nearby computer would analyze the data indicating that might happen and prevent the tool from cutting the artery, with a faster response than the surgeon, located over a thousand miles away, could give. So, remote surgery will likely be a mixture of Edge Computing and command and control, if that use case should ever become widely available.

Starrob

With the way I have seen remote surgery described, remote surgery will not use the cloud compute model nor the Fastly “Cloud Edge” compute model. Instead, the video feed from the cameras in the operating room will likely be processed right next to the operating room, or somewhere in the hospital, and then sent over the internet to screens where the surgeon is located.

What is this processing that you think is happening next door?

The point being made here is that the image information needs to get across the internet to the surgeon, who then needs to react to that image and act on the reaction, and the action then needs to go back across the internet to the surgical device.

2 Likes

What is this processing that you think is happening next door?

The point being made here is that the image information needs to get across the internet to the surgeon, who then needs to react to that image and act on the reaction, and the action then needs to go back across the internet to the surgical device.

Read my latest response in the response just before yours. I just said that description was in error. I will repeat it again:

Also, I made a mistake with my Remote Surgery example. Edge Computing will likely be used with remote surgery, but not in the way I described it in my first post. In the past, an issue was brought up about the speed of light not allowing response times of ten milliseconds if the operating room and surgeon are something like 1,000 miles apart. Well, here is how researchers are looking to get around that problem:

“Edge computing means that you can place computing power and processing power really far out in the network. You can place a lot of critical software that relies on machine learning or computer assisted vision, which is an advantage you didn’t really have before.”

In remote surgery, robotic applications need to be able to analyse data on their own to provide assistance and make sure procedures are completed safely, quickly and accurately. As 5G surgeries move towards greater distances, different edge computing requirements will be needed to sustain the speed and reliability of the connection.

Read More: https://www.medicaldevice-network.com/features/5g-remote-sur…

In other words, remote surgery will have compute running sophisticated AI algorithms with less than a 10 millisecond response time. If lag, or even a surgeon’s mistake, could potentially cause an operating tool to accidentally cut into a major artery, a machine learning algorithm running on a nearby computer would analyze the data indicating that might happen and prevent the tool from cutting the artery, with a faster response than the surgeon, located over a thousand miles away, could give. So, remote surgery will likely be a mixture of Edge Computing and command and control, if that use case should ever become widely available.

Starrob

1 Like

I know they have performed surgery remotely using da Vinci, but I don’t believe that is the norm. I believe the surgeon and patient are usually in the same facility and don’t rely on the internet or the edge to perform procedures. Do we have anyone with direct robotic surgery experience who can address this?

1 Like

Read my latest response in the response just before yours. I just said that description was in error. I will repeat it again:

Surgery is not like a conversation where one speaks and then the other speaks. It’s simultaneous two-way communication: the patient sends a continuous stream of data (his image and whatever the instruments are picking up), and the surgeon sends a continuous stream of data to the robot tools.

I wonder how much delay there can be before the surgeon loses his effectiveness.

When one is steering a boat there is quite a delay in the response which makes novices oversteer but you soon get the hang of it. I imagine surgeons would also get the hang of it. I don’t think it’s quite as critical as some of you paint it. I could be wrong.

The Captain

2 Likes

I wonder how much delay there can be before the surgeon loses his effectiveness.

The Captain

I have seen different answers to that question on the internet. I once saw an article, in .pdf form, by a researcher in remote surgery that said 150 milliseconds was optimally the max for making remote surgery widespread, at least for the vision aspects of the surgery. Using haptics requires even lower response times; I forget how much was said.

To give people an idea of how much lag there can be in a system, check this article out:

(2015) Roger Smith, CTO of Florida Hospital Nicholson Center in Celebration, FL, tested lag times created by the internet between his facility and one in Ft. Worth, TX for potential remote surgery. The lag times were negligible and ranged from 30 to 150 milliseconds.

https://www.rawscience.tv/hospital-tests-lag-time-for-remote…

Celebration, FL and Ft. Worth, TX are roughly 1,000 miles apart, and that 30 to 150 millisecond lag sounds reasonable for the internet over that kind of distance.

Now, this article should be given a full read:

The Mimic Simulator was able to first artificially dial up lag times, starting with 200 milliseconds (100 milliseconds is one-tenth of a second) all the way up to 600 milliseconds.

At 200 milliseconds, surgeons could not detect a lag time. From 300 to 500 milliseconds, some surgeons could detect lag time, but they were able to compensate for it by pausing their movement. But at 600 milliseconds, most surgeons became insecure about their ability to perform a procedure, Smith said.

Read More: https://www.computerworld.com/article/2927471/robot-performs….

I am not going to get into that whole article, but it is an interesting read. So, my interpretation of that is that if researchers want to make Remote Surgery available to ALL surgeons instead of only some surgeons, then 200 milliseconds of lag should be the cutoff for safety reasons.
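As a rough sanity check, here is a crude linear scaling of that Nicholson Center measurement against the 200 millisecond cutoff (real internet latency does not scale this neatly; this is only an illustration):

```python
# Crude linear scaling of the Nicholson Center measurement (30-150 ms over the
# roughly 1,000 miles between Celebration, FL and Ft. Worth, TX), compared
# against the ~200 ms cutoff from the Computerworld article. Illustration only.

MEASURED_RANGE_MS = (30, 150)
MEASURED_DISTANCE_MILES = 1000
SAFE_CUTOFF_MS = 200

def scaled_lag_ms(distance_miles):
    low, high = MEASURED_RANGE_MS
    factor = distance_miles / MEASURED_DISTANCE_MILES
    return (low * factor, high * factor)

for label, miles in [("Within the USA (~1,000 mi)", 1000),
                     ("New York City to Hamburg (~3,807 mi)", 3807)]:
    low, high = scaled_lag_ms(miles)
    verdict = "OK" if high <= SAFE_CUTOFF_MS else "may exceed the 200 ms cutoff"
    print(f"{label}: {low:.0f}-{high:.0f} ms -> {verdict}")
```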

From what I gather, most remote surgery is possible today within the USA from several hundred miles away strictly using command and control systems, if there is a very reliable connection available. But if we are talking about attempting a remote surgery from something like New York City to Hamburg, Germany (3,807 mi), then to do it reliably it would also be an Edge computing use case, to compensate for the lag. That is just my interpretation from reading a few articles on the subject.

Starrob

The vast majority of all operations are still done locally. The likelihood of having a surgical robot but not having a surgeon who can use it is very small. More likely would be to have a surgeon operating the machine locally but have the video streamed to another specialist if necessary.

It also depends on how delicate the procedure is. A hernia repair could probably be done remotely with no issues, since there isn’t as much detail needed. A nerve sparing prostatectomy would require a lot more precision. You also need to add human response time to the system. It’s still possible, given that those structures are generally fixed. So the surgeon just has to go slow. But they can operate based on what’s on their screen, not necessarily what’s real time.

For robotic heart surgery that’s a problem. The heart is beating and you need to react quickly. Sometimes it’s a matter of timing your needle throw based on the beating heart, but if the heart takes an extra beat you have milliseconds to react. AI could probably help at some point, but I suspect that’s a long way off.

I do a lot of catheter and needle based procedures. Much of it could also be done robotically since I go by information on an X-ray machine or ultrasound which have inherent delays anyway to process the images.

It might make sense for some kind of more direct relay between the surgeon and the robot, which FSLY might be better equipped to handle, with any image compression or post processing handled by their edge computing. I think it’s far enough away in the future that it’s unlikely to be significant in the investment thesis today.

6 Likes