Over at SoftwareStackInvesting, Poffringa has a new detailed post on Fastly’s technology (https://softwarestackinvesting.com/fastly-edge-compute-expla… ).
It’s highly technical, but if I may, here are some perhaps easier-to-digest takeaways:
• Fastly was founded in 2011 by Artur Bergman, who was frustrated with the existing CDN solutions available to him.
• Fastly’s Chief Product Architect recently said “Fastly has a long history of looking at problems from first principles and being unafraid to undertake difficult projects if we know they will benefit our customers.”
• Fastly took a software-driven approach to the networking of their CDN, based on Arista’s (ANET) SDN switches. Using switches instead of routers was a big cost saving for Fastly, and by running their own software on top, they can optimize content delivery in ways fixed hardware routers can’t.
• Fastly’s overall design is to have fewer but larger POPs than their competitors. Not just larger, but much more efficient, with SSDs and custom file system software for performance. Some super-popular content, such as “Like” or “Share” button icons, is actually served out of memory, not disk.
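The “serve the hottest objects from memory” idea can be sketched as a tiny in-memory cache. This is purely illustrative (not Fastly’s actual code; the function names and fake SVG payload are my own):

```python
# Illustrative sketch only (not Fastly's code): keep super-popular objects,
# like a "Like" button icon, cached in memory so repeat requests never
# touch disk.
from functools import lru_cache

def load_from_disk(path: str) -> bytes:
    # Stand-in for an SSD read on a cache miss.
    return f"<svg><!-- icon for {path} --></svg>".encode()

@lru_cache(maxsize=1024)
def serve_hot_object(path: str) -> bytes:
    # First request for a path reads from "disk"; every later request
    # for the same path is served straight from memory.
    return load_from_disk(path)
```

The first request pays the disk cost; every request after that for the same icon is a pure memory hit.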
• Almost every aspect of Fastly’s CDN is customer-programmable, performant, and cost-efficient, with features like instant cache purging, limited deployment regions, and instant activation/deactivation.
• Compute@Edge moves computations out of central servers and closer to end-users in Fastly’s distributed network. This is a serverless environment, which essentially means customers don’t have to manually manage server process instances and only pay for what they use. Fastly’s processes start up orders of magnitude faster than anyone else’s, including Amazon’s Lambda, the first popular serverless computing architecture. Like Fastly’s re-thinking of CDNs, they went back to first principles for their serverless architecture, and it’s unlike anyone else’s: almost all competing offerings start up a container per invocation. Those containers are heavy, in that they carry all kinds of functionality not necessarily designed for serverless. Fastly instead designed its processes around a small kernel that starts up quickly and leaves it to the applications to bring in only what they need to run.
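To make the serverless model concrete, here is a minimal sketch of the invocation-and-billing idea. Every name here is hypothetical; this shows the general model (handler invoked per request, metered for usage-based billing), not Fastly’s actual API:

```python
# Hypothetical sketch of the serverless model: the platform runs the
# customer's handler per request and meters execution time, so the
# customer manages no server processes and pays only for what runs.
import time

def handler(request: dict) -> dict:
    # Customer code: brings in only what it needs; no long-running server.
    name = request.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}"}

def platform_invoke(request: dict):
    # The platform's side: start the handler, time it for billing.
    start = time.perf_counter()
    response = handler(request)
    billed_seconds = time.perf_counter() - start
    return response, billed_seconds
```

The whole pitch of Compute@Edge is that the `platform_invoke` overhead (the startup cost before `handler` runs) is microseconds rather than the milliseconds a container-based platform needs.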
• Compute@Edge won’t be rolled out to everyone until 2021, so no revenue from it until then. Smorgasbord here: with select customers already building solutions in the beta, once it goes live, revenue could start pretty high from day one.
• As an example of Compute@Edge, Shopify will be offering its customers the ability to do custom product discounts, beyond standard things like “buy 1, get 1 free.” Shopify’s customers will be able to create their own discount rules and run them within the Shopify environment in a fast, compact, secure manner on top of Fastly. Shopify reported a speed of 1,000 requests per second.
• Poffringa warns us potential investors that the market for this kind of distributed serverless compute environment is unproven. Engineers might love it, but that doesn’t mean customers will flock to it. And if they do, Poffringa expects that the big cloud vendors will eventually modify their own existing serverless offerings in an attempt to match Fastly’s speed and security. However, Fastly likely has a big head start.
My own (Smorgasbord’s) take is that given those vendors already have a big, heavy-weight architecture, they’re going to have to make some tough decisions in trying to match Fastly, decisions their existing customers won’t like. For instance, AWS’s Lambda gives their customers a full web container with all sorts of functionality. That’s one reason it takes so long to start up. If Amazon pulls functionality, existing customer workloads won’t work without significant re-writing on the customer side, which they won’t like. If AWS chooses a whole new approach that matches Fastly, then customers will be confused as to which Amazon serverless model to choose.
• Competition includes Cloudflare’s Workers product, which is already available. Workers has a 3,000-5,000 microsecond startup time, compared to Fastly’s 35 microsecond startup time (and compared to Amazon’s 200,000 microsecond startup time). Workers has a smaller footprint than AWS (3MB versus 35MB), but Compute@Edge’s memory footprint is much, much smaller: only several KB (1MB = 1,000KB).
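Using the startup figures quoted above (and the upper end of the Workers range), the relative gaps are easy to compute:

```python
# Startup times quoted in the post, in microseconds.
FASTLY_US = 35         # Fastly Compute@Edge
WORKERS_US = 5_000     # Cloudflare Workers (upper end of 3,000-5,000)
LAMBDA_US = 200_000    # AWS Lambda

# Ratio of each competitor's startup time to Fastly's.
workers_ratio = WORKERS_US / FASTLY_US   # roughly 143x
lambda_ratio = LAMBDA_US / FASTLY_US     # roughly 5,700x
```

In other words, by these numbers Fastly starts up over a hundred times faster than Workers and thousands of times faster than Lambda.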
Definitely worth reading the whole article, even if you have to take it in chunks and skip over some technical passages.