Any epiphanies stand out to you?
This may be controversial, but it appears to me there’s quite a bit of irrational exuberance around Edge Computing and Edge Networks.
There is nothing magical about the edge. Doing compute or serving up content from the edge can reduce latency. That’s it. A number of advantages being attributed to Edge Computing actually come from distributed computing, and the big cloud providers already support distributed workflows.
There was a thread here a while back on 5G and Edge Computing. 5G doesn’t demand Edge Computing. 5G itself reduces the latency from endpoints into the network, so one way to look at it is that edge computing is needed less, not more. Sure, there may be new use cases requiring even less latency than is possible today that the combination of 5G and Edge Computing might enable. But those use cases are not yet well described or understood, and their business value hasn’t been established. Some people have taken 5G/Edge Computing too far, with examples like remote surgery performed from 1,000 miles away, which runs into speed-of-light obstacles that require science fiction, not 5G or Edge Computing, to overcome.
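To put rough numbers on that surgery example, here’s a back-of-the-envelope sketch. The figures are assumptions: light in fiber propagates at roughly two-thirds of its vacuum speed, and real networks add routing, switching, and processing delay on top of this floor.

```python
# Back-of-the-envelope latency floor for a 1,000-mile round trip.
# Assumes light in fiber travels at roughly 2/3 of its vacuum speed and
# ignores routing, switching, and processing delays, so real-world
# latency would only be higher.

C_VACUUM_KM_S = 299_792              # speed of light in a vacuum, km/s
C_FIBER_KM_S = C_VACUUM_KM_S * 2 / 3
DISTANCE_KM = 1000 * 1.609           # 1,000 miles in kilometers

one_way_ms = DISTANCE_KM / C_FIBER_KM_S * 1000
round_trip_ms = 2 * one_way_ms

print(f"One-way propagation: {one_way_ms:.1f} ms")    # ~8 ms
print(f"Round trip:          {round_trip_ms:.1f} ms")  # ~16 ms
```

Sixteen-plus milliseconds of round-trip delay is physics. Neither 5G nor any edge deployment can buy it back.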
Back to the edge and latency: Fastly gives us a great counterexample to “being closer to the edge is always better.” Akamai is arguably closest to the edge with its hundreds of thousands of servers, yet Fastly often serves up content faster. That’s because network latency is not the only factor determining performance.
The same analysis, looking at overall performance rather than just network latency, needs to be done for Edge Computing use cases. Many applications require central computing services, such as querying a central database for information. If your database is mostly static (doesn’t change often), then you can distribute it out to the edge and potentially gain some performance by servicing queries there. This would work, for instance, for authorizing users when they log in. Their (hashed) passwords don’t change frequently, so copies can be kept and updated at all the edge servers, and when people log in from anywhere you can authorize them without the network hop to a central repository. Cool.
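As a minimal sketch of that idea (the names are hypothetical, and a real system would use a slow, salted hash like bcrypt or argon2 rather than bare SHA-256), edge-side authentication against a locally replicated credential store might look like this:

```python
import hashlib
import hmac

# Hypothetical credential store replicated from the origin to this edge
# node. Values are password hashes; SHA-256 is used here only to keep
# the example self-contained.
EDGE_CREDENTIALS = {
    "alice": hashlib.sha256(b"correct horse battery staple").hexdigest(),
}

def authenticate_at_edge(username: str, password: str) -> bool:
    """Authorize a login using only data already present at the edge,
    avoiding the round trip to the central database."""
    stored = EDGE_CREDENTIALS.get(username)
    if stored is None:
        return False  # unknown here; a real system might fall back to origin
    candidate = hashlib.sha256(password.encode()).hexdigest()
    return hmac.compare_digest(stored, candidate)  # constant-time compare

print(authenticate_at_edge("alice", "correct horse battery staple"))  # True
print(authenticate_at_edge("alice", "wrong password"))                # False
```

The whole transaction stays at the edge precisely because the underlying data changes rarely.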
However, many databases do have contents that change frequently. Moving these to the edge means you’re constantly pushing updates from the central origin server to all the edge servers. This creates additional bandwidth charges and may actually increase effective latency for the user, since a query to the edge may not have the actual latest information yet, and may not even know that it doesn’t. So, if you have a worldwide eCommerce store and want to show your customers what’s in stock and what isn’t, you’re probably not going to have your inventory database at the edge. Every time someone bought something you’d have to tell all the other edges, even if no one there was looking at those items. And if you delayed updating them, well, then the edge has to contact the central server and you’ve lost the network latency advantage.
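Here’s a small sketch of that trade-off (all names and numbers are made up): cache an inventory count at the edge with a time-to-live, and you either serve a possibly stale answer fast, or pay the origin round trip to get a fresh one.

```python
import time

# Hypothetical edge cache for inventory counts. Entries expire after
# TTL_SECONDS; after that the edge must ask the origin again, giving
# back the latency advantage the edge was supposed to provide.
TTL_SECONDS = 5
edge_cache: dict[str, tuple[float, int]] = {}  # sku -> (fetched_at, qty)

def fetch_from_origin(sku: str) -> int:
    """Stand-in for a cross-country query to the central database."""
    time.sleep(0.080)  # simulate an ~80 ms round trip
    return 3

def stock_level(sku: str) -> int:
    entry = edge_cache.get(sku)
    if entry and time.time() - entry[0] < TTL_SECONDS:
        return entry[1]              # fast, but possibly already stale
    qty = fetch_from_origin(sku)     # fresh, but no latency advantage
    edge_cache[sku] = (time.time(), qty)
    return qty

print(stock_level("widget-42"))  # slow path: pays the origin round trip
print(stock_level("widget-42"))  # fast path: cached, and possibly wrong
```

The alternative, pushing every purchase out to every edge immediately, just trades that staleness for write fan-out bandwidth.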
This is why CDNs are at the edge: there is much content that doesn’t change frequently, like a company’s JPEG logo or even an article in the NY Times. Once published, it’s rarely updated (e.g., only for corrections). Again, even here we see from Fastly that closest to the edge isn’t always best. A 7-11 on every corner won’t have a full stock of items, but you can’t put a Costco warehouse on every corner. Maybe a few Safeways centrally located in urban centers are the more efficient model.
When we look at edge computing, what are the use cases that are improved or enabled? Cloudflare, for instance, cited “image processing” as an example for their expanded serverless edge compute offering. I can imagine a case where a camera might send an image to an edge computer to be analyzed for changes from a previous image to detect movement. But if you need to compare that image to one from another camera on the other side of the country, or against a large stock of images gathered from multiple cameras and updated frequently, then you may need central computing anyway.
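The local case really is simple; a minimal sketch of frame-differencing motion detection (the threshold is an arbitrary assumption) fits in a few lines:

```python
import numpy as np

# Minimal sketch of the "local" edge case: detect motion by comparing
# the current frame to the previous one. Frames are assumed to be
# grayscale numpy arrays.
DIFF_THRESHOLD = 12.0  # mean per-pixel change that counts as motion

def motion_detected(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return float(diff.mean()) > DIFF_THRESHOLD

# Toy demo: a bright rectangle appears in an otherwise black frame.
prev = np.zeros((480, 640), dtype=np.uint8)
curr = prev.copy()
curr[100:300, 100:400] = 255
print(motion_detected(prev, curr))  # True
```

Comparing against a large, frequently updated corpus from many cameras is a different problem: that index lives centrally, so the query goes back to the core anyway.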
And even if the compute can be done at the edge, the question becomes whether you should push it past the network edge to the actual endpoint itself. Apple does facial recognition right on the phone. That eliminates the network completely. Yes, that increases the compute you need at the endpoint, but there are often other solutions. For instance, Wyze (maker of cheap home surveillance cameras) just came out with a new outdoor camera that runs on a battery, so it can’t do complex computations or constant network uploading that would kill battery life. Instead of image processing to detect movement, they added a cheap PIR sensor that reduces battery, computation, and network costs all at once.
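The design pattern there is worth spelling out: let the cheap sensor gate the expensive pipeline. A hypothetical event loop (the function names are illustrative stand-ins for device-specific APIs, not Wyze’s actual firmware) might look like:

```python
import random
import time

def pir_triggered() -> bool:
    """Poll the PIR sensor; this draws microwatts and does no image work."""
    return random.random() < 0.01  # simulated occasional motion

def capture_and_upload_clip() -> None:
    """Wake the image sensor and radio; the battery-expensive path."""
    print("motion: capturing and uploading clip")

def camera_loop(polls: int = 200) -> None:
    for _ in range(polls):
        if pir_triggered():
            capture_and_upload_clip()  # spend power only when the cheap
                                       # sensor says something moved
        time.sleep(0.01)  # sleep between polls instead of streaming video

camera_loop()
```

The camera computes and uploads only when the microwatt sensor says it’s worth it, which is how a battery-powered device sidesteps both edge and cloud most of the time.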
So, what I’m saying is that not all, heck, not even most, computing is going to move from central cloud computing to the edge, and some of what does move to the edge may move beyond the network edge to the endpoint device itself. How big is the edge computing market? No one really knows yet.
I think we need to understand just what use cases edge computing enables, how it differs from distributed computing, and what alternatives companies have for their compute needs. Like 5G, Edge is becoming a buzzword. I don’t invest in buzzwords. As Fastly has shown, a few additional milliseconds of latency are not always worth eliminating.
I am bullish on companies like Fastly. I’m just trying to be realistic about what the long-term future really holds for them. They may announce blow-out earnings Wed afternoon based on their CDN offerings, but I’ll be most interested to hear whether they can add any color around what value their beta customers are seeing with edge computing.