I also don’t have too many issues with ZMs security approach. Obviously their prior customers didn’t really either.
Not to belabor the point, but some prior Zoom customers absolutely did - certainly you recall all the companies and organizations that literally banned/dropped Zoom usage. Clearly part of Yuan’s wake-up call on security.
Kindig’s excuse that others have security issues as well misses that wake-up call. It’s an Orwellian rewrite for her to say that Zoom’s security approach has always been good. It wasn’t.
Perhaps worst of all, Kindig didn’t point out that Zoom is going to be the only major video conferencing solution that offers end-to-end encryption for free! Considering the incorrect things she said about the competition, she probably doesn’t even realize that.
The concept of “bringing origin servers to the edge” seems reasonable.
No, it doesn’t. Kindig is mixing her terminology, and after listening to her, I’m convinced she’s just throwing out buzzwords with at best a superficial understanding of what they mean. “Origin server” is CDN nomenclature for the singular data source of what’s getting replicated (e.g., a NY Times article). No one in the Edge Computing space talks about “origin servers” because this is edge computing, not content hosting. Netflix uses a CDN; your Ring doorbell camera uses edge computing.
For instance, if you’re running analytics at the edge, you’re not sourcing data from some central origin - you’re typically sourcing data from your endpoints (i.e., IoT devices, cameras, automobiles, heart monitors, smart watches, etc.) and running the analytic calculations at the edge, either because you want a faster turnaround in getting that calculated data back to those (and perhaps other) devices, or because you want to reduce the amount of data being transmitted. In Edge Computing, the data primarily comes from the endpoint devices, not from an origin server.
Sure, there may be some centrally stored data that needs to be available at the Edge Compute location, but that’s typically not real-time data and so you don’t need your origin server at the edge - you need compute capability at the edge. And you can use a CDN as well to have any central data cached at the Edge. This is why it makes logical sense for CDN companies to expand into Edge Computing use cases.
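To make the compute-at-the-edge point concrete, here’s a minimal sketch (every name and number below is invented for illustration) of an edge node aggregating raw endpoint readings locally and shipping only a compact summary upstream - which is the whole bandwidth/latency argument in miniature:

```python
# Hypothetical sketch: the edge node reduces raw endpoint data locally
# and sends only a small summary to the central cloud. No origin server
# involved - the data originates at the endpoints.

def summarize_at_edge(readings):
    """Collapse a batch of raw sensor readings into a small summary dict."""
    if not readings:
        return {"count": 0}
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

# 10,000 raw data points from local IoT devices...
raw = [20.0 + (i % 7) * 0.1 for i in range(10_000)]

# ...become one tiny record bound for the central cloud.
summary = summarize_at_edge(raw)
print(summary["count"])  # 10,000 readings collapsed into four numbers
```

That’s the trade in a nutshell: the computation moves to where the data is produced, and only the result travels far.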
Don’t take my word for it. Here’s Cloudflare’s take:
edge computing means running fewer processes in the cloud and moving those processes to local places, such as on a user’s computer, an IoT device, or an edge server. … The edge is a bit of a fuzzy term; for example a user’s computer or the processor inside of an IoT camera can be considered the network edge, but the user’s router, ISP, or local edge server are also considered the edge. The important takeaway is that the edge of the network is geographically close to the device, unlike origin servers and cloud servers, which can be very far from the devices they communicate with.
(https://www.cloudflare.com/learning/serverless/glossary/what… )
You’ll also note that in the examples Cloudflare gives - 1) cameras with on-board processing for motion detection, so that only images containing motion are sent up to the cloud, and 2) an internal company IM application that doesn’t have to route chat messages between remote offices through some central server on the other side of the globe and back - neither has its data on origin servers. Their data comes from the endpoints. The Edge itself can be either in those endpoints (the cameras) or a local compute server (the IM application).
The concept of deploying your origin servers on various availability zones is an (early and big hammer) example of bringing origin servers closer to your edge.
I don’t understand why you’re bringing availability zones into this discussion. Availability zones are for failure recovery within a Region. As Amazon says: “Availability Zones are distinct locations within an AWS Region that are engineered to be isolated from failures in other Availability Zones.” https://docs.aws.amazon.com/AmazonElastiCache/latest/mem-ug/… Worse yet, “Each region is designed to be completely isolated from the other regions.”
A truly world-wide distributed cloud model is a tricky thing to implement. I don’t know of any database, for instance, in which someone in Singapore can be adding records locally while someone in California is deleting records, with everything staying instantly in sync. It seems impossible - those two “edge” locations would have to coordinate with each other, and there goes your network latency.
When Kindig talks about Amazon and Microsoft bringing origin servers to the edge - even if we let the nomenclature misuse slide, what is she talking about? Is this some future capability on which she has the inside scoop? Does she actually realize what those companies already provide?
For instance, Amazon already supports Edge Computing via their Lambda and Greengrass offerings. Lambda is Amazon’s serverless computing architecture/API, which effectively means that your applications can just make an API call to AWS to get something done without the hassle of spinning up a cloud instance. Greengrass is code that Amazon gives you which lets you run Lambda functions locally on your endpoints. “With AWS IoT Greengrass, connected devices can run AWS Lambda functions, Docker containers, or both, execute predictions based on machine learning models, keep device data in sync, and communicate with other devices securely – even when not connected to the Internet.”
See https://aws.amazon.com/iot/solutions/iot-edge/ and https://aws.amazon.com/greengrass/
So, you can take Lambda code you’ve written for the cloud and easily move it to your edge. But this isn’t new. Why isn’t Amazon taking over the world with this?
Now, Kindig did mention Equinix in the same sentence as Amazon and Microsoft. I don’t get that - Equinix is a pure edge computing play and completely unlike AWS or Azure. Would I have to put words into her mouth to get her to say something reasonable like: “Amazon and Microsoft will expand their offerings to reduce latency and bandwidth requirements so some companies won’t need to turn to Edge Computing architectures. And for those that do, a pure-play Edge Computing company like Equinix looks more promising to me than CDNs expanding into that space.”
That would have been a totally reasonable thing to say. But she didn’t say that. If you didn’t already know those companies’ offerings and how they differ, you wouldn’t have been able to infer any of it from what she said.
Also, she’s wrong to think that CDN players like Fastly don’t have a leg up on Edge Computing. One of the most demanding edge computing use cases is online gaming. And, as I posted a little while ago, Riot Games (maker of the dominant online game, League of Legends) actually built their own internet backbone based on architectures that Fastly developed and published! So, when she says CDNs can’t play in the Edge Computing space, I believe she doesn’t actually understand Fastly’s architectural differentiation from other CDNs, and how Fastly is better set up to move into Edge Computing than the legacy “lots of POPs” CDNs like Stackpath, Akamai, etc.
BTW, if you’re interested in Edge Computing, here’s a good document from Equinix covering not only their services but also the underlying architectures: https://equinix.box.com/shared/static/ftumr3bnqq455zfnb0c3lm… In particular, it’s got some network topology diagrams that really help in understanding what’s going on. Too bad TMF’s board software is still stuck in the 1990s and we can’t even embed an image.