Cloudflare's Edge Computing & VaporIO

I am creating this thread to discuss the idea of actually moving physical servers nearer to the end user.

It seems Cloudflare, as of right now, has a true “Edge” product. What I mean by that is physically moving compute closer to the end user (as close as 10 to 20 km) using micro data centers, and they are doing this by aligning with Vapor IO:

Vapor IO announced an additional investment by private equity firm Berkshire Partners and Crown Castle, bringing its aggregate Series C funding to $90 million and providing the means to accelerate its Kinetic Edge buildout.

The rollout appears timely for the cellular industry as operators deploy 5G and edge capabilities. Vapor IO’s mini data center sites can be used to support these and other operators.

The company now says it has the funds to build out its platform in 36 markets by 2021.

Read More: https://www.fiercewireless.com/wireless/crown-castle-backed-…

The Kinetic Edge product:

The core concept of the Kinetic Edge is to take the power and resources of the centralized cloud, but position them within 10 to 20 km of its end users. Vapor IO achieves this using infrastructure edge computing, where micro data centers are positioned on the operator side of the last mile network. These edge data centers contain enough resources in a local area to provide the flexibility and power of the cloud, with the locality and low latency of the edge.

https://www.vapor.io/kinetic-edge/

Vapor IO intends to cluster these micro data centers to increase the coverage area, and they mention another reason to prefer clustered micro data centers over a traditional centralized data center: reliability.

The power, redundancy and flexibility of the Kinetic Edge continues to grow as more Kinetic Edge sites are deployed in an area. With three or more Kinetic Edge sites, not only does the redundancy of the Kinetic Edge continue to grow, but so do the number of availability zones and the power of the edge computing resources in the area, all joined by the Kinetic Edge Fabric.

With only six Kinetic Edge sites in a metropolitan area, the Kinetic Edge makes it possible to exceed the level of reliability of a tier 4 data center, providing twelve nines of reliability (99.9999999999% uptime) which far exceeds what traditional centralized data centers can provide. With critical applications emerging at the edge, such as autonomous driving, this level of reliability isn’t optional; it’s required to keep mission-critical use cases operating safely.

Read More: https://www.vapor.io/kinetic-edge/

Cloudflare is aligned with Vapor IO and goes wherever Vapor IO goes:

Vapor IO’s partnership with Cloudflare, specifically, will “unleash a new class of edge-to-core applications, giving developers the power to run serverless JavaScript applications on the Kinetic Edge platform, in close proximity to end users and devices, using Cloudflare Workers® as well as extending Cloudflare’s core CDN, security, and workload services even farther out at the edge.”

Read More: https://news.crunchbase.com/news/with-cloudflare-as-a-custom…

It appears that Cloudflare has a true “Edge” computing strategy of moving closer to the end user, not simply by building another centralized POP, but by partnering with Vapor IO, which intends to build micro data centers in 36 markets by 2021. Vapor IO also intends to cluster these micro data centers locally for increased coverage and reliability.

The idea of using micro data centers is discussed here:

The death of the corporate data center may be a bit premature. Yes, enterprises are going cloud, but the data center could become decentralized via “micro data centers.”

Micro data centers are essentially exactly what they sound like. They use racks and a much smaller footprint. Some of these micro data centers sit in cases that look like gun lockers. Others are mini racks with integrated systems. At a Schneider Electric conference in New York City this week, there was one mini data center design that sat in what looked like an entertainment center. Yes folks, in the future there may be a data center sitting in the middle of a store or bank branch.

Read More: https://www.zdnet.com/article/whats-next-for-data-centers-th…

Vapor IO is not the only micro data center company either. This article profiles several such startups:

Startups focused on micro data centers could fill a void created by growing demand to process IoT data closer to the network edge.

Data-hungry technology trends such as IoT, smart vehicles, drone deliveries and smart cities are increasing the demand for fast, always-on edge computing. One solution that has emerged to bring the network closer to the applications generating and end users consuming that data is the micro data center.

Now, with the arrival of 5G, the demand for edge data centers could be ready to explode. In anticipation, several of these startups intend to drop micro data centers at the base of every 5G tower they can gain access to.

Read More: https://www.networkworld.com/article/3445382/10-hot-micro-da…

Among the micro data center startups mentioned in the article are:

Axellio

Compass Datacenters

DartPoints

EdgeConneX

EdgeMicro

MetroEdge

ScaleMatrix

Vapor IO - which is aligned with Cloudflare according to published reports.

If micro data centers do in fact decrease latency to the point where major corporations start using them for IoT and other Edge use cases that require lower latency than even the fastest public CDN services can provide, then I wonder how long it will take a company like Fastly to adopt a similar strategy of using micro data centers to lower latency. Something to think about.

Starrob

33 Likes

If micro data centers do in fact decrease latency to the point where major corporations start using them for IoT and other Edge use cases that require lower latency than even the fastest public CDN services can provide, then I wonder how long it will take a company like Fastly to adopt a similar strategy of using micro data centers to lower latency. Something to think about.

As I posted here earlier (https://discussion.fool.com/i39m-one-of-those-nerds-some-popular… ), Fastly’s CDN architecture uses fewer, but more highly performant, servers (with lots of RAM and SSD drives) to overcome a few milliseconds of additional latency with much higher cache hit rates, faster content access, the ability to handle dynamic content, and super-fast updating of content across the network. Fastly itself advertises that the legacy CDN strategy of using lots of smaller data centers is outdated in today’s higher-speed internet world.

So, it’s crazy to think that Fastly will completely revert the architectural direction of their CDN offerings to the old way of doing things because some entrenched competition is doubling down on the legacy method. It would be like Tesla putting gas engines into their pure-EVs now because BMW just released a new hybrid car (which they actually just did, the 530e). Not going to happen.

I suppose we could have a discussion of micro data centers, what “the edge” means to different applications, how 5G and latency matter, etc., but since this has already started out with a conflation of at least 3 different Use Cases (Content Delivery, Compute Serving and IoT), it’ll be too hard to unravel and won’t have much relevance to investing in high growth companies.

12 Likes

Fastly’s CDN architecture uses fewer, but more highly performant, servers (with lots of RAM and SSD drives) to overcome a few milliseconds of additional latency with much higher cache hit rates, faster content access, the ability to handle dynamic content, and super-fast updating of content across the network. Fastly itself advertises that the legacy CDN strategy of using lots of smaller data centers is outdated in today’s higher-speed internet world.

So, it’s crazy to think that Fastly will completely revert the architectural direction of their CDN offerings to the old way of doing things because some entrenched competition is doubling down on the legacy method. It would be like Tesla putting gas engines into their pure-EVs now because BMW just released a new hybrid car (which they actually just did, the 530e). Not going to happen.

Thanks smorg, you explained it so that even a non-techie like me could understand it.
Saul

So, it’s crazy to think that Fastly will completely revert the architectural direction of their CDN offerings to the old way of doing things because some entrenched competition is doubling down on the legacy method. It would be like Tesla putting gas engines into their pure-EVs now because BMW just released a new hybrid car (which they actually just did, the 530e). Not going to happen.

Smorgasborg

I say that Edge computing, meaning putting compute close to the end user, is the likely way things will trend.

I will simply give one example of an application that will require compute and storage very close to the end user, and that is VR & AR. According to experts, AR/VR applications must have a latency of 20 ms to work, and a latency of 7 ms or less is even better. That is mentioned in this article on the subject:

To get that type of VR-quality latency, operators will need to move the compute power closer to the network edge. According to Mo Katibeh, CMO of AT&T Business, AT&T has seen latency rates below 10 milliseconds (ms) in its fixed 5G trials. (Experts say AR/VR applications must have a latency rate of 20 ms to work but that a latency of 7 ms or less is even better) But he adds that edge computing will help reduce latency even further. “Putting compute power closer to where it’s needed while simultaneously increasing network speeds with mobile 5G has the capacity materially to reduce latency and be transformational for business,” he said.

Read More: https://www.lightreading.com/mobile/5g/atandt-verizon-hope-5…

Fastly has low latency but not ultra low latency on the 10ms level. Fastly’s latency from my home is median 29 ms, fastest 21 ms, slowest 38 ms.

People can experiment for themselves; here is a website to measure latency to Fastly: https://cloudharmony.com/speedtest-for-fastly:cdn

I also measured Cloudflare here: https://cloudharmony.com/speedtest-for-cloudflare:cdn

Cloudflare’s latency from my home is median 29.5 ms, fastest 26 ms, slowest 41 ms.

I also measured Akamai here: http://cloudharmony.com/speedtest-for-akamai

Akamai’s latency from my home is median 27.5 ms, fastest 20 ms, slowest 48 ms.

All of these latencies are comparable, but they all fail the bar for VR-quality latency.
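For anyone who wants a rough, do-it-yourself check alongside the CloudHarmony pages, here is a minimal Python sketch that times TCP handshakes to a couple of CDN-fronted hostnames. The hostnames and sample count are my own illustrative choices, not official test endpoints, and the results depend on your ISP and location just like the numbers above.

```python
# Rough latency check: time TCP handshakes to a few CDN-fronted hostnames.
# The hostnames and sample count are illustrative assumptions, not official
# test endpoints; results will vary with your ISP and location.
import socket
import statistics
import time

HOSTS = {
    "Fastly-fronted site (example)": "www.fastly.com",
    "Cloudflare-fronted site (example)": "www.cloudflare.com",
}

def tcp_connect_times_ms(host, port=443, samples=5):
    """Return TCP connect times in milliseconds for a handful of attempts."""
    times = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # close immediately; we only want the handshake time
        times.append((time.perf_counter() - start) * 1000)
    return times

for label, host in HOSTS.items():
    t = tcp_connect_times_ms(host)
    print(f"{label}: median {statistics.median(t):.1f} ms, "
          f"fastest {min(t):.1f} ms, slowest {max(t):.1f} ms")
```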

So, I am not currently a big believer in the idea that Fastly’s network is viable for the following Edge use cases, many of which require ultra-low latency, bandwidth savings, or both:

https://www.zdnet.com/article/10-scenarios-where-edge-comput…

Or how about this:

IoT Gateways

Edge Computing and the Internet of Things (IoT) go hand-in-hand. With the explosion of new connected devices, everything from your car to your toaster now has an IP address. These new devices are producing a lot of data. So much data that your limited Internet uplink can’t keep up.

Connected devices can consume less backhaul bandwidth by processing the majority of that data at the Edge in an IoT Gateway close to the source, rather than in the Cloud. And should the uplink go down, the IoT gateway can continue to function so you’re not stuck in the dark when your connected light switch and your connected light bulb lose their connections to the Cloud and each other.

Read More: https://medium.com/@mfcaulfield/edge-computing-9-killer-use-…
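The bandwidth-saving idea in that excerpt can be sketched in a few lines of Python. This is only an illustration of the filtering concept, with a made-up threshold and hypothetical sensor readings, not any vendor’s actual gateway logic.

```python
# A minimal sketch of the IoT-gateway idea above: process readings locally and
# forward only meaningful changes upstream, cutting backhaul bandwidth.
# The threshold and the sample readings are hypothetical, for illustration only.

def filter_at_edge(readings, threshold=0.5):
    """Return only readings that differ from the last forwarded value by more
    than `threshold`; everything else is handled (and dropped) at the gateway."""
    forwarded = []
    last_sent = None
    for value in readings:
        if last_sent is None or abs(value - last_sent) > threshold:
            forwarded.append(value)  # this is what would be uploaded to the cloud
            last_sent = value
    return forwarded

# Ten raw temperature samples, but only three need to leave the building:
samples = [21.0, 21.1, 21.0, 21.2, 23.5, 23.4, 23.6, 21.0, 21.1, 21.0]
print(filter_at_edge(samples))  # [21.0, 23.5, 21.0]
```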

So, yes…I can think of plenty of reasons why companies would want compute closer to the user rather than relying on Fastly’s public CDN. At this stage I am not sure how much failing to compete on the Edge would disadvantage Fastly, if at all.

Starrob

8 Likes

Starrob: I will simply give one example of an application that will require compute and storage very close to the end user, and that is VR & AR.

I previously pointed out that since Starrob has already conflated 3 different Use Cases (Content Delivery, Compute Serving and IoT), it’ll be too hard to unravel and won’t have much relevance to investing in high growth companies.

And now it’s 4 Use Cases.

Applying Fastly’s network latency, designed for a content delivery application, to a VR/AR application is like applying a UPS van’s acceleration to an Indy car race. Different technologies for different applications. The race car can literally run circles around a UPS van on the track, but it would be a really slow way to deliver that BBQ from Amazon I just ordered.

3 Likes

And now it’s 4 Use Cases.

Smorgasbord1

There are far more than 4 use cases that require low latency; I only mentioned a few of them. Edge computing solves problems at various levels that Fastly’s current network might never solve.

For instance, last mile bottlenecks:

There’s also the problem of the “last mile” bottleneck, in which data must be routed through local network connections before reaching its final destination. Depending upon the quality of these connections, the “last mile” can add anywhere between 10 to 65 milliseconds of latency.

https://www.vxchnge.com/blog/the-5-best-benefits-of-edge-com…

Maybe part of the reason I can’t get 10 ms on the Fastly test is that I am a Comcast customer and Comcast is jamming things up. I didn’t mention that the first time I ran the test I got really wild numbers…my slowest latency to Fastly was around 500 ms, and that is probably on Comcast.

How will the problem of local bottlenecks be solved, especially as more IoT devices come online and possibly create even worse internet traffic jams with a veritable flood of data?

Starrob

4 Likes

If I may add a bit of historical perspective: during my long years in the business, the focus has shifted back and forth between core and edge depending on the relative ascendancy of the various technologies that make up the system.

I develop websites that I host both on a shared server in California and on my laptop. I remember one time when I was getting results faster from the remote server than from my laptop which was an older machine. At other times the Wi-Fi, which is a party line, is so congested that the laptop is faster.

I was a bit wary about the edge claim until I read Cloudflare’s explanation (see below). In the case of data collected by IoT devices, a lot of the data carries no new information, and it is the filtering out of that garbage at the edge, to cut bandwidth usage, that justifies the edge servers.

As the saying goes, your mileage may vary…

Denny Schlesinger

What is edge computing?

Edge computing is a networking philosophy focused on bringing computing as close to the source of data as possible in order to reduce latency and bandwidth use. In simpler terms, edge computing means running fewer processes in the cloud and moving those processes to local places, such as on a user’s computer, an IoT device, or an edge server. Bringing computation to the network’s edge minimizes the amount of long-distance communication that has to happen between a client and server.

[snip]

Consider a building secured with dozens of high-definition IoT video cameras. These are ‘dumb’ cameras that simply output a raw video signal and continuously stream that signal to a cloud server. On the cloud server, the video output from all the cameras is put through a motion-detection application to ensure that only clips featuring activity are saved to the server’s database. This means there is a constant and significant strain on the building’s Internet infrastructure, as significant bandwidth gets consumed by the high volume of video footage being transferred. Additionally, there is very heavy load on the cloud server that has to process the video footage from all the cameras simultaneously.

https://www.cloudflare.com/learning/serverless/glossary/what…
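To put rough numbers on Cloudflare’s camera example, here is a back-of-the-envelope sketch. The camera count, bitrate, and activity fraction are assumptions for illustration, not figures from the article.

```python
# Back-of-the-envelope numbers for the camera example above. The camera count,
# per-stream bitrate, and 5% "activity" fraction are assumptions for
# illustration, not figures from Cloudflare's article.
cameras = 24                # "dozens" of HD cameras
bitrate_mbps = 8            # rough bitrate of one 1080p stream
activity_fraction = 0.05    # share of time that actually contains motion

raw_streaming_mbps = cameras * bitrate_mbps
edge_filtered_mbps = cameras * bitrate_mbps * activity_fraction

print(f"Streaming everything to the cloud: {raw_streaming_mbps:.0f} Mbps sustained")
print(f"Motion detected at the edge:       {edge_filtered_mbps:.1f} Mbps on average")
# Streaming everything to the cloud: 192 Mbps sustained
# Motion detected at the edge:       9.6 Mbps on average
```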

4 Likes

I was a bit wary about the edge claim until I read Cloudflare’s explanation (see below). In the case of data collected by IoT devices, a lot of the data carries no new information, and it is the filtering out of that garbage at the edge, to cut bandwidth usage, that justifies the edge servers.

captainccs

After a lot of reading, the two biggest advantages I have seen written about for Edge Computing are lower latency and lower bandwidth usage, which can be extremely important as it is expected that there will be more than 21 billion IoT devices online by 2025:

1. By 2025, it is estimated that there will be more than 21 billion IoT devices

A quick look back shows where IoT devices are going. Consider: In 2016, there were more than 4.7 billion things connected to the internet, according to IOT Analytics. Fast-forward to 2021? The market will increase to nearly 11.6 billion IoT devices.

Read More: https://us.norton.com/internetsecurity-iot-5-predictions-for…

Starrob

1 Like

Fastly has low latency but not ultra low latency on the 10ms level. Fastly’s latency from my home is median 29 ms, fastest 21 ms, slowest 38 ms.

Fastly claims that their Compute@Edge instances can start up in 35 microseconds, which is 100 to 1,000 times faster than competing serverless solutions from AWS Lambda, Cloudflare, etc., which start up in milliseconds. Note that this low latency applies only to their Compute@Edge product, which was released 11/19. It is still in the beta test stage and will be sold in 2021.

So, why did your tests show comparable results for FSLY and Akamai? My guess is that they are not using FSLY’s Compute@Edge product.

I highly recommend Peter Offringa’s article to understand FSLY and its competition.

https://softwarestackinvesting.com/fastly-fsly-stock-review/…

The following is also a good read.

https://www.cmlviz.com/2019/11/12/FSLY/fastly-nyse-fsly-is-a…

6 Likes

So, why did your tests show comparable results for FSLY and Akamai? My guess is that they are not using FSLY’s Compute@Edge product.

Texmex

Whether or not Compute@Edge is used probably has less to do with it than the ISP does.

This is a picture of where a CDN normally sits between an end-user device and the internet backbone. The CDN is the red dot:

https://www.cloudflare.com/img/learning/cdn/glossary/edge-se…

So when a web browser on a computer makes a request for a website, the request goes out over the ISP; the CDN has the website data in a cache and sends it back over the ISP to the computer. What a CDN does is save time by making sure the request and the data do not have to travel over the internet backbone.

Things like Compute@Edge primarily affect the time from when the request for data is first received to when the data is sent back toward the local network and then to the computer…but look what sits between the CDN and the local network…that’s right, the ISP. My ISP is Comcast, and let’s say I perform this test when Comcast has huge internet traffic jams slowing everything down…well, Fastly can be as fast as they want, but if the ISP is slow then that can drastically affect my latency and what the test says the latency is.

If an ISP has inconsistent latency, then the latency seen in my computer tests will also be inconsistent. That is why Edge Computing, meaning placing compute and storage between the ISP and the local network, has many proponents: Edge Computing avoids any slowness, inconsistencies, or (to a limited extent) outages on the part of the ISP. If some application requires a consistent 20 ms latency and the ISP is inconsistent in its latency, it does not matter what Fastly does.
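A quick back-of-the-envelope sketch of that budget argument follows. Every number in it is an illustrative assumption, not a measurement of any particular provider.

```python
# A rough latency budget, following the argument above. All numbers are
# illustrative assumptions, not measurements of any particular provider.

AR_VR_BUDGET_MS = 20  # the latency target cited earlier in the thread

def end_to_end_ms(last_mile_ms, edge_processing_ms):
    """Total latency is roughly the last-mile delay plus time spent at the CDN/edge."""
    return last_mile_ms + edge_processing_ms

# Even a very fast edge platform blows the budget if the last mile is slow.
for last_mile in (5, 15, 30, 65):  # a best case plus the 10-65 ms last-mile range quoted above
    total = end_to_end_ms(last_mile, edge_processing_ms=1)
    verdict = "within budget" if total <= AR_VR_BUDGET_MS else "misses budget"
    print(f"last mile {last_mile:>2} ms -> total {total:>2} ms ({verdict})")
```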

Well, someone might say, why doesn’t Fastly cut the ISP out of the picture…yeah…that is what I say, and that would be the beginning of Fastly becoming an Edge computing player, in my opinion.

Starrob

1 Like

If an ISP has inconsistent latency, then the latency seen in my computer tests will also be inconsistent. That is why Edge Computing, meaning placing compute and storage between the ISP and the local network, has many proponents: Edge Computing avoids any slowness, inconsistencies, or (to a limited extent) outages on the part of the ISP. If some application requires a consistent 20 ms latency and the ISP is inconsistent in its latency, it does not matter what Fastly does.

Starrob

Let me expand on that. An additional source of latency might come from the distance between where the CDN connects to the ISP and the end user. For instance, if one CDN connects to Comcast five hundred miles away and another CDN connects to Comcast only ten miles away from the end user, the nearer CDN gains a slight latency advantage, simply because of the way Comcast has structured its network: the data from the farther connection might have to go through more hops, traffic jams and delays than a connection only 10 miles away.
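For a sense of scale, here is a rough propagation-delay estimate for those two distances. It ignores routing hops and queuing, which usually add more, so treat the results as lower bounds rather than measurements.

```python
# A rough propagation-delay estimate for the distance argument above. Light in
# fiber travels roughly 200 km per millisecond; real paths add routing hops and
# queuing on top, so these are lower bounds, not measurements.

def fiber_rtt_ms(km_one_way):
    """Round-trip propagation time over fiber, ignoring routing and queuing."""
    km_per_ms = 200.0
    return 2 * km_one_way / km_per_ms

print(f"~10 miles  (16 km):  {fiber_rtt_ms(16):.2f} ms round trip")   # ~0.16 ms
print(f"~500 miles (800 km): {fiber_rtt_ms(800):.2f} ms round trip")  # ~8.00 ms
```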

So, I asked before…why not skip the ISP and set up a proxy server and cache in the connection between the ISP and the local network? Well, that would be more expensive, and I don’t think Fastly will do that for every business customer. Fastly might do things like that for major corporations with the ability to pay and the right use case. The Fastly product that does skip the ISP is called the Managed CDN product: https://www.fastly.com/solutions/managed-cdn

Fastly’s Managed CDN product looks like this: https://www.fastly.com/cimages/6pk8mg3yh2ee/dghTYFRDeocewwqA…

What that shows is a Fastly Managed POP sitting between the ISP and a company’s network. It goes: company network, Fastly Managed POP (where the cache sits), ISP, Fastly’s public CDN.

Fastly has a limited number of customers doing that, and it is a commercial service, not a consumer service. How big Fastly will make that business is an open question.

Also, like I stated before, the only thing Fastly would need to do to become an Edge Compute company is add compute to the Managed POP. Some people say Fastly will never do that. I say it remains a possibility.

Starrob

1 Like

Thanks Texmex,

I love this quote, in the second article, by the Fastly CEO.

“On the revenue side, the forecast that we have given in the past has not included the additional edge compute revenue from this new version of our edge compute, the Compute@Edge, and we aren’t ready yet to give guidance on what that will do, but it’s not been baked in into the previous numbers.”

Fastly claims that their Compute@Edge instances can start up in 35 microseconds, which is 100 to 1,000 times faster than competing serverless solutions from AWS Lambda, Cloudflare, etc., which start up in milliseconds. Note that this low latency applies only to their Compute@Edge product, which was released 11/19. It is still in the beta test stage and will be sold in 2021.

So, why did your tests show comparable results for FSLY and Akamai? My guess is that they are not using FSLY’s Compute@Edge product.

Yeah, as I’ve been saying, we need to be really careful about which Use Cases and Applications we’re talking about - and so far this thread has muddled the applications up.

Fastly says their serverless processes start up in 35 microseconds. This has nothing to do with their CDN offerings, for which I assume the processes are always running for best performance. Note that Fastly’s compute services are still in private beta. CloudHarmony does not connect to them.

If you wanted to compare the network latency of different cloud compute services, this page: https://cloudharmony.com/network-3m-for-compute-from-ripe will do the trick. Where I am, in Silicon Valley, AWS was in the top 5, proving that the latency gains from dedicated edge server locations are often overstated. Just use AWS Lambda and be happy.

If you wanted to compare the network latency of different CDN services, this page: https://cloudharmony.com/network-3m-from-ripe will suffice. Here, for me, Akamai comes in second place at 9ms, with Cloudflare in third at 12ms, and Fastly in 11th place at 20ms.

But, it’s important to note that these are only network latency comparisons, not overall performance. Fastly can easily overcome an 11ms disadvantage with their more efficient POP servers. For instance, if an Akamai POP doesn’t have the page you want in its cache, it’s got to go back to the provider’s origin, which can easily take hundreds of ms. Since Fastly’s POPs have larger caches AND the caches are stored on fast SSDs, page loading performance from Fastly will often be faster.
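That trade-off can be sketched with a tiny expected-latency formula. The hit rates and origin-fetch penalty below are made-up illustrative numbers, not measured figures for any CDN.

```python
# A minimal sketch of the cache-hit trade-off. The hit rates and origin-fetch
# penalty are made-up illustrative numbers, not measured figures for any vendor.

def expected_latency_ms(network_ms, hit_rate, origin_fetch_ms):
    """Average response time: network RTT plus the origin penalty on cache misses."""
    return network_ms + (1 - hit_rate) * origin_fetch_ms

# CDN "A": lower network latency but a smaller cache (hypothetical numbers).
a = expected_latency_ms(network_ms=9, hit_rate=0.85, origin_fetch_ms=300)
# CDN "B": higher network latency but a bigger cache and better hit rate.
b = expected_latency_ms(network_ms=20, hit_rate=0.97, origin_fetch_ms=300)

print(f"CDN A (9 ms network, 85% hit rate):  {a:.1f} ms expected")  # 54.0 ms
print(f"CDN B (20 ms network, 97% hit rate): {b:.1f} ms expected")  # 29.0 ms
```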

3 Likes

Let me expand on that.

Please don’t. It’s one thing to use Google to learn about technology; it’s another thing to post incomplete and inaccurate learnings as if you actually understand the technology. Between here and the premium Fastly board it’s taken a lot of effort to correct Starrob’s frequent misstatements. For instance:

The Fastly product that does skip the ISP is called the Managed CDN product: https://www.fastly.com/solutions/managed-cdn

Fastly does not have a product that “skips the ISP.” The Managed CDN is designed to provide full visibility, control and robust security oversight while dramatically reducing delivery and operations costs and improving user experience, not to “skip the ISP.”
https://www.fastly.com/press/press-releases/introducing-fast…

Matter of fact, there is no way to “skip the ISP” because that’s the last mile to users. Even Riot Games, when they set out to reduce latency, were unable to “skip the ISP.” You can read about what Riot did here: https://technology.riotgames.com/news/fixing-internet-real-t… and can see in the first diagram that the ISP is still very much in the picture.

Riot built their own internet backbone. Interestingly, Riot themselves leveraged learnings that Fastly posted about their own Fastly Network. Riot’s page referencing Fastly is here: https://technology.riotgames.com/news/fixing-internet-real-t… and the Fastly blog page they reference is here: https://www.fastly.com/blog/building-and-scaling-fastly-netw…

As you can see, this gets detailed very quickly. Suffice it to say that an analogy for building your own network is taking a Greyhound bus versus building your own dedicated rail system. Greyhound buses want to carry as many passengers as they can, so they have multiple stops. Even if you can get a direct route, the buses still have to drive over public highways, which are not the shortest distance between two points and have traffic and traffic lights. With a dedicated train system, you build the tracks (lots of land negotiations) and stations and then run the trains. Riot leased their own fiber and optical wave connections, developed their own routing software, etc. But they still have the ISP for the last mile. Of course, that can’t be “skipped.”

Interestingly, what Riot did was partially based on what Fastly had already done for their CDN offering. Fastly wanted to bypass routers and go directly through switches. Luckily, Arista (ANET) had introduced their SDN product, supported by APIs, thus enabling Fastly to do what they needed (see the Fastly link above or this video presentation: https://vimeo.com/132842124).

Starrob has also confused what legacy CDN companies were saying about “the edge” and misunderstood how Fastly’s different architecture could be superior. I had to point out that Fastly was aware of the potential for this confusion when they wrote:
Legacy CDN vendors are in a tough spot. They’ve all built their architectures based on the conditions that were prevalent 15 to 20 years ago… However, these vendors are now stuck with hundreds of thousands of small, disparate servers around the world … Upgrading all of these smaller POPs is practically impossible, so legacy vendors have resorted to positioning this limitation of their networks as a strength.
https://www.fastly.com/blog/why-having-more-pops-isnt-always…

When the Application we’re discussing shifts from Content Delivery to something else, whether it be compute serving, IoT, or VR/AR, we need to recognize that the relationships and relative importance of the different technologies change. Starrob is locked into an “edge location reduces latency” mindset, which just isn’t appropriate for all applications.

It’s virtually impossible for laymen to delve into the networking realm and provide useful prognostications about what companies should do in terms of network topology. It’s wrong to insist that Fastly needs to put more POPs closer to users, or even that content companies will choose the Managed CDN product in order to put more POPs closer to their users. It’s clear that Fastly understands the trade-offs in networking performance really well: one of the most successful gaming companies, which relies on having the lowest latency possible, learned from Fastly how to build its own network backbone and make it fast.

18 Likes

Smorg thanks for your knowledge and experience. I appreciate your posts and understand you are an Expert in all aspects of the Internet. But sometimes you just have to let it go, especially when people just want to argue.

The Fastly product that does skip the ISP is called the Managed CDN product: https://www.fastly.com/solutions/managed-cdn

SMH.

Andy

4 Likes

Smorg thanks for your knowledge and experience. I appreciate your posts and understand you are an Expert in all aspects of the Internet. But sometimes you just have to let it go, especially when people just want to argue.

Thanks, but I’m not nearly as expert as you suggest, especially with that capital “E.” I make many mistakes, both in investing and in technical explanations. I’m sure there are people on this board who know networking far better than I. They just probably have better things to do with their time, and if they did wade in would be able to point out inaccuracies in posts I’ve made.

A little bit of knowledge can be a dangerous thing. Sometimes understanding the technology isn’t helpful for investing, and this thread probably reached that milestone early on. Sorry about that, but I do think it was valuable to know that Riot Games had Fastly as their low latency networking inspiration, and even that Arista provided the SDN technology that enabled Fastly in the first place. Companies like these with solid technologies are not always the best investments, but technology companies without such a solid footing are rarely good investments.

2 Likes

Thanks, but I’m not nearly as expert as you suggest, especially with that capital “E.” I make many mistakes, both in investing and in technical explanations. I’m sure there are people on this board who know networking far better than I. They just probably have better things to do with their time, and if they did wade in would be able to point out inaccuracies in posts I’ve made.

Smorgasbord1

Experts on networking should speak up.

Some people interpret back-and-forth as something undesirable, but I look at it as a way to learn. I put my thoughts out there for a number of reasons, among them to see whether my ideas can stand up to scrutiny by others; another reason is to refine how I think.

Secondly, expertise does not always mean a person is right. Expertise’s strength is that the expert knows the fine details of how things operate. Expertise’s weakness is that “Experts” can become myopic, so focused on details (sometimes details that ignore the bigger picture) that they have a harder time thinking outside the box and considering different approaches to solving a problem within an industry or an industry sub-segment. Sometimes the best ideas come from outside the circle of experts.

A metaphor for this is the seaborne shipping industry: for a long time almost everything was transported via bulk shipping. It took someone from outside the “expertise” of seaborne shipping to develop a much better way to transport products, by developing shipping containers; the container idea came from the railroad industry: https://en.wikipedia.org/wiki/Malcom_McLean

I am certain that “experts” initially told Malcom McLean, when he first proposed seaborne shipping containers, that “it is just not the way things are done,” as all new ideas seem to get resistance.

So squelching ideas from outside a small expertise circle is not always the best thing because sometimes the very best ideas come from outside a circle of expertise. Sometimes it is wise to ask, “Is that a better idea?”

It is just my opinion but I believe Fastly could very well go in the direction of servicing clients that desire specific use cases that require ultra-low latency. Take note that I never said one word about Fastly putting POPs all over the place as far as the eye can see. I never suggested such a thing.

What I am suggesting is that for clients with specific end use cases that require low latency, Fastly can deploy an edge cloud platform on dedicated POPs within a company’s private network at locations of the company’s own choosing. For those who are unaware, that product is listed on Fastly’s website and already has customers:

Fastly’s Managed CDN provides maximum control and flexibility. We deploy our edge cloud platform on dedicated POPs within your private network at locations of your choosing. Our service can be used exclusively, or as part of a hybrid, multi-CDN strategy.

Read More: https://www.fastly.com/solutions/managed-cdn

Starrob

2 Likes

THIS HAS TURNED INTO A BACK-AND-FORTH PERSONAL ARGUMENT. IT’S TIME TO END IT OR TAKE IT OFF BOARD!
SAUL

16 Likes