Why Fastly Can Do It

I was just going to reply to this post and say “Wow, look at what the Co-Founder of Docker said!!”, but then I started looking deeper and had an “AH-HAH” moment:
https://discussion.fool.com/Post.aspx?mid=34561158&reply=tru…

This is the part that held enough “wow” to get me to start a reply:

“from the founder and CTO of Docker, arguably the most popular container technology. “If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing.” Twitter, March 2019. This underscores the cutting-edge nature of the technology underpinnings for Fastly’s serverless edge compute solution.”

Docker is THE container solution. For them to say they wouldn’t have needed to create Docker if they had this is amazing!! I didn’t know this and Docker is a big part of the tech I touch regularly.

Allow me to take a meandering route back to what this means to Fastly…


Before I go on, please understand this is more on the systems and virtualization side of tech, which is outside of my normal stomping grounds. I expect replies like “close but here are 5 things you got wrong”…so correct me if you know better! This should get close though.


Here is a technical description of what this is all about in the first two paragraphs of:
Mozilla Announces WASI Initiative to Run Web Assembly on All Devices, Computers, Operating Systems
https://www.infoq.com/news/2019/04/wasi-wasm-system-interfac…
"WebAssembly code across all devices, machines and operating systems. The new standard, WebAssembly System Interface (WASI), defines one single conceptual operating system interface, which can be implemented by multiple, actual operating systems. At the difference of previous “Run Anywhere” efforts like Java, WASI builds on WebAssembly, a rare collaboration between browser vendors and manufacturers of chips, devices, computers and operating systems to produce a patent-free, open standard. The WASI standard will strive to provide WebAssembly’s portability and security through a modular set of standard interfaces, and to provide a solid foundation for an ecosystem. Mozilla and Fastly are already shipping prototypal WASI implementations.

WASI aims to be a system interface for the WebAssembly platform (currently implemented by the four major browser engines). WebAssembly (Wasm) describes itself as a “binary instruction format for a stack-based virtual machine”, with the design goal to “execute at native speed by taking advantage of common hardware capabilities available on a wide range of platforms”. Wasm is used as a target for compilation of high-level languages like C/C++/Rust. While WebAssembly was primarily designed to run on the open web, Mozilla seeks now to extend WebAssembly’s reach to non-web embeddings, “including everything from minimal shells for testing to full-blown application environments e.g. on servers in datacenters, on IoT devices, or mobile/desktop apps”

WebAssembly is the result of a rare collaboration between browser vendors and major companies such as Microsoft, Google, Apple, Mozilla, Intel, Samsung and more.

WASI-enabled apps can currently be run in the browser with a polyfill, or outside the browser with Mozilla’s Wasmtime, or Fastly’s Lucet.

Solomon Hykes, co-founder of Docker, says:
If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing. A standardized system interface was the missing link. Let’s hope WASI is up to the task!
"

Note that this version has more to it than the quote in the original post. He sounds a little more hopeful than certain here?

My Take…

First, for context, what is Docker? Docker is container technology: a container bundles an environment plus everything you need to run some software. It is fully self-contained, so you can drop it on any system and run it without worrying about what the person has (or has not) installed on their computer, which versions they have, whether they are compatible, etc, etc, etc. There are a ton of advantages to distributing something “containerized” rather than shipping a list of prerequisites (dependencies) to install and configure just so, in an environment that can NOT be predicted (someone’s computer or server). It is simple to hand over a container instead. What Docker is NOT is a virtual machine.

A virtual machine is an entire computer in a virtual box. For example, a virtual machine needs Windows, Linux, or macOS to be fully installed inside it. Virtual machines are virtual though, so you can have a Windows virtual machine running in a window on your Mac (VMware is a popular one you can use at home, and it is how I ran Quicken for Windows when I first moved to a Mac years ago). Virtual machines take almost as much time to start up as your computer does (they are lighter, but still a full OS). That is WAY TOO SLOW for serverless use. Can you imagine clicking a button and having to wait for a computer to start up and get you the answer? Nope.

Enter non-persistent Docker containers. A request for some processing comes in through a “hook,” which triggers a Docker container to spin up, run the application inside it (a “service”), and get you your answer. You get a clean environment every time too, so you don’t have to worry about junk building up over time, causing problems and tying up resources. When the work is done, the container simply shuts down and cleans up after itself. No extra things running in the background all the time. This is great, but not fast enough for a company named Fastly!

Back to that technical article quote above…

So you have this platform that was already being used to build applications in browsers, WebAssembly (Wasm). It has been shipping in all major browsers since 2017. Basically, programmers write code in the language they want and then compile it to this low-level binary format that is small and can start up and run efficiently. You wouldn’t want to write it by hand, though, which is why we use high-level programming languages AND THEN another program, called a “compiler,” takes the human-written code and turns it into something like Wasm.
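
To make that concrete, here is a minimal sketch assuming a Rust toolchain with the wasm32-wasi target installed (Rust is one of the languages called out above); the exact commands may differ a bit by toolchain version:

    // hello.rs - a perfectly ordinary program; nothing WebAssembly-specific in the source.
    fn main() {
        println!("Hello from the edge!");
    }

    // Turning it into a small, portable Wasm binary is just a matter of picking a
    // different compile target:
    //   rustup target add wasm32-wasi
    //   rustc --target wasm32-wasi hello.rs -o hello.wasm
    // The resulting hello.wasm is the low-level binary that a runtime such as
    // Wasmtime or Lucet can load and run outside the browser.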

Then you add this very low-level layer, the WebAssembly System Interface (WASI), which gives a Wasm program a standard way to talk to whatever machine it is running on (files, clocks, and so on). I gather this is essentially a thin, common interface implemented on top of existing operating systems, so the same compiled program can run on any device, inside or outside a browser, without having to worry about the details?

Getting to the Point…
I have been wondering what it is exactly that makes Fastly so…well…fast? Some have speculated it is taking some old tech and doing something on top of it that combined makes a new thing…huh? Well this is where all the stuff above comes together.

  1. Fastly created Lucet, a WebAssembly compiler, and open-sourced it! Lucet compiles WebAssembly ahead of time into native machine code. This means anyone out there can take something they wrote in any one of the many supported popular programming languages and compile it to be run on Fastly’s cloud. The key is…

  2. Lucet has another component, not previously discussed here, called a runtime [environment]. We know now that WebAssembly can run in browsers. Fastly wrote this thing that lets that same application/service run in their cloud instead. This is the key to the speed. They do not have to spin up containers with all the extras. They just make an “instance” of the WebAssembly app/service (rough sketch below). They have full control over the runtime environment, so they can provide all the right stuff to run the apps and control every aspect of how they start up and run!
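
To illustrate that “instance per request” idea, here is a rough sketch using Mozilla’s open-source Wasmtime crate (mentioned alongside Lucet in the article above) rather than Lucet itself. The module file name and its exported “handle_request” function are made up for illustration, the sketch assumes the wasmtime and anyhow crates, and the exact API varies between Wasmtime versions:

    use wasmtime::{Engine, Instance, Module, Store};

    fn main() -> anyhow::Result<()> {
        // Load and compile the customer's Wasm module once, up front.
        let engine = Engine::default();
        let module = Module::from_file(&engine, "customer_service.wasm")?; // hypothetical module

        // Pretend these are three incoming edge requests. Each one gets a brand-new,
        // isolated instance inside this same process; no container or VM to boot.
        for request_id in 0..3 {
            let mut store = Store::new(&engine, ());
            let instance = Instance::new(&mut store, &module, &[])?;

            // Call a function the module exports (the name is made up here).
            let handle = instance.get_typed_func::<i32, i32>(&mut store, "handle_request")?;
            let status = handle.call(&mut store, request_id)?;
            println!("request {} -> status {}", request_id, status);

            // The store and instance are dropped at the end of each loop pass, so the
            // per-request sandbox disappears: a clean environment every time, no container needed.
        }
        Ok(())
    }

Lucet goes a step further than this sketch by compiling the module to native machine code ahead of time, which is how the per-request setup shrinks to the microseconds Fastly quotes below.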

Fastly created a computing platform that is leaner and meaner because they control the whole runtime, while also giving developers the ability to use languages they are already familiar with, plus a tool to compile the result and ship it to be executed in Fastly’s cloud.

A point for the technically interested is that they aren’t even multi-processing here. They are running multiple instances in the same process. The resulting reduction in overhead here has to be really impressive. Something like multi-threading perhaps…?

In their words:
https://www.fastly.com/blog/announcing-lucet-fastly-native-w…
"A major design requirement for Lucet was to be able to execute on every single request that Fastly handles. That means creating a WebAssembly instance for each of the tens of thousands of requests per second in a single process, which requires a dramatically lower runtime footprint than possible with a browser JavaScript engine. Lucet can instantiate WebAssembly modules in under 50 microseconds, with just a few kilobytes of memory overhead. By comparison, Chromium’s V8 engine takes about 5 milliseconds, and tens of megabytes of memory overhead, to instantiate JavaScript or WebAssembly programs.

With Lucet, Fastly’s edge cloud can execute tens of thousands of WebAssembly programs simultaneously, in the same process, without compromising security. The Lucet compiler and runtime work together to ensure each WebAssembly program is allowed access to only its own resources. This means that Fastly’s customers will be able to write and run programs in more common, general-purpose languages, without compromising the security and safety we’ve always offered."

Here they are comparing their performance to browser runtimes, but they don’t even touch on the fact that they are also displacing the “regular” programs shipped and run in containers!

Of course, if you want to play with Lucet, it is free, and you can get it in a Docker container to make it easy to try out ;). The container comes with a runtime preconfigured so you can run your program and see if it does everything you want before you ship it to the cloud for primetime use “out in the wild”.

"Beyond the edge cloud
We are excited to open source Lucet because of all the possibilities WebAssembly holds beyond the web browser and edge cloud. For instance, Lucet’s support for WASI is a big step towards WebAssembly programs that can run on whatever platform the user wants — in the cloud, at the edge, on the browser, or natively on your laptop or smartphone — all while keeping the same strong guarantees about security in place. We want to enable WebAssembly to thrive inside any program that allows scripting or extensions, while using fewer resources than current solutions…"


What I am not really clear on is how big a barrier to entry it would be for another company to create a runtime that supports WebAssembly services. Mozilla is an open-source foundation that is already offering one called wasmtime (here is the code: https://github.com/CraneStation/wasmtime). It is mentioned in the Mozilla announcement piece above right next to Fastly. Can’t another company grab this, drop it on some machines, and start hosting apps? Help me connect the dots, because questions like this come from ignorance more often than not in situations like this!

51 Likes

Rafe, thanks for this great write up.

I have no idea what 95% of it means. All I know is Fastly should report around 60% revenue growth this quarter, maybe even better, and I keep seeing them win business from the most innovative companies around: Amazon, Shopify, Spotify, Stripe, etc.

Imagine if they land Walmart as a big customer soon. It seems possible given the Shopify-Walmart partnership and Walmart’s Prime Day competitor.

Revenue from Shopify, Amazon, and Spotify alone is probably enough to fuel this company to a 2x-3x from here.

Management says their performance is better than others and the deals they continue to win seem to back that up. Good enough for me.

Long since $20/share

13 Likes

If I may, let me try to boil this down into more digestible and investing-related chunks:

• Virtual Machines (VMs) and Containers (e.g., Docker) were invented as ways to make writing applications easier. VMs isolate your application from the underlying hardware/OS, and Containers go further by incorporating everything you need in a tidy package. However, that convenience and stability come at a performance price, especially with start-up time, but also with cloud computing charges, since the whole process/environment is always running.

• Serverless computing was invented by Amazon (AWS “Lambda”). This is actually the opposite of containers: the operating environment is moved out of your application, which now consists of a set of API calls to the cloud. In a typical computing “everything old is new again” manner, Lambda just lets people program on the cloud the way they used to program on a personal computer - via individual APIs provided by an operating environment. API programming was great for desktop/laptop applications, but initially impractical for the cloud. Lambda was the first to make it work.

• But the ugly truth behind serverless is that there is still very much a server running your application, and that server has to be created and started up for each and every individual API call you make. Still, the trade-offs are often worth it, especially for applications that need to scale dynamically and aren’t compute-intensive (like IoT data gathering).

• What Fastly has done with Lucet and Terrarium is create a serverless environment in which the server can be started up orders of magnitude faster than anyone else can manage. Since this is done for each and every API call, this is huge.

That last point is all you really need to know. If you have an app that runs on AWS Lambda now, moving it to Fastly will increase its performance. More importantly, if you have an app that you want to run serverless, but haven’t because the cloud couldn’t keep up, Fastly’s cloud will enable it. This is potential new business that no one else supports today. Finally, remember that this is part of Fastly’s Compute@Edge, so it’s not on the market yet.

If you care, the background on this is that Fastly took WebAssembly, an open-source browser technology designed to run code faster and more securely than JavaScript, and re-purposed it in a completely new way for edge computing. Here’s Fastly’s perspective (from Rafes’ link): https://www.fastly.com/blog/announcing-lucet-fastly-native-w…
Lucet is designed to take WebAssembly beyond the browser, and build a platform for faster, safer execution on Fastly’s edge cloud. WebAssembly is already supported by many languages including Rust, TypeScript, C, and C++, and many more have WebAssembly support in development. We want to enable our customers to go beyond Fastly VCL and move even more logic to the edge, and use any language they choose.

That “any language they choose” part is actually pretty important. It enables easier migration to this environment, as well as opening it up to developers who already have a favorite language.

Note that while Lucet (the compiler and runtime environment) is open-sourced, Terrarium, the edge-computing platform based on WebAssembly, is private to Fastly. I don’t know the details, but I suspect Fastly is protecting its secret sauce while at the same time encouraging others to hop on its bandwagon. In the computing world, developers avoid proprietary solutions; they want standards to avoid lock-in. Fastly apparently intends to achieve lock-in by delivering the best solution, and making the compiler and environment open supports that.

51 Likes

Thanks for cleaning that up Smorgasbord1.

Re: “Serverless computing was invented by Amazon (AWS “Lambda”). This is actually the opposite of containers”

I took serverless computing to mean a server isn’t running all the time. A lambda function (or some orchestration system) triggers a service (a small server that handles a request(?)) to start up, do some work, then go away.

AWS seems to have a number of products for working with containers (https://aws.amazon.com/containers/services) including Fargate (https://aws.amazon.com/fargate). The Fargate description sounds a lot like Fastly’s approach except using containers, which means a lot more overhead (the old way, with slow startup times). Here is the full description so you don’t have to bother with the links if you don’t want to:

AWS Fargate is a serverless compute engine for containers that works with both Amazon Elastic Container Service (ECS) and Amazon Elastic Kubernetes Service (EKS). Fargate makes it easy for you to focus on building your applications. Fargate removes the need to provision and manage servers, lets you specify and pay for resources per application, and improves security through application isolation by design.

Fargate allocates the right amount of compute, eliminating the need to choose instances and scale cluster capacity. You only pay for the resources required to run your containers, so there is no over-provisioning and paying for additional servers. Fargate runs each task or pod in its own kernel providing the tasks and pods their own isolated compute environment. This enables your application to have workload isolation and improved security by design. This is why customers such as Vanguard, Accenture, Foursquare, and Ancestry have chosen to run their mission critical applications on Fargate.

Again, this is not my area of expertise, so I may be suffering from a little knowledge being a dangerous thing. I’ve used Lambda on a side project, but didn’t get into this area at all. Clarification welcome!

1 Like

I took serverless computing to mean a server isn’t running all the time. A lambda function (or some orchestration system) triggers a service (a small server that handles a request(?)) to start up, do some work, then go away.

Yes, except I’m not following the “orchestration system” part.

The Fargate description sounds a lot like Fastly’s approach except using containers, which means a lot more overhead (the old way, with slow startup times).

I suppose you can look at it that way, with the difference being that Fastly can start up a WebAssembly “container” instance way faster than anyone can start up any Docker instance. But Fargate isn’t about speed; it’s about convenience. It makes managing Docker instances easier, since AWS handles all that for you.

I think the important aspect to consider is the Edge in Edge Computing. Lambda and Fargate run on AWS, which is not an edge computing environment. AWS is a great cloud environment, but even Amazon chose Fastly to be the CDN for IMDb, for instance.

The initial idea behind edge computing is to reduce network latency. By not going all the way to a central cloud server, you may save valuable milliseconds by running it on an edge computer instead. Competitor StackPath understands this: https://blog.stackpath.com/edge-serverless-vs-cloud-serverle…

But, Fastly takes it even further by having the fastest startup time for the process and compiling ahead of time for additional performance. So, you save both latency and processing time. And, you can write in the language of your choice. Tastes great AND less filling!

11 Likes

“from the founder and CTO of Docker, arguably the most popular container technology. “If WASM+WASI existed in 2008, we wouldn’t have needed to create Docker. That’s how important it is. WebAssembly on the server is the future of computing.” Twitter, March 2019. This underscores the cutting-edge nature of the technology underpinnings for Fastly’s serverless edge compute solution.”

Thank you Rafes for explaining the significance of this quote. I read Poffringer’s writeup and the quote stood out as something very important but since I am not a techie I could not fathom the meaning. It seemed however that this represented a major advantage for FSLY.

Your writeup makes it clear that this is indeed the case. I didn’t understand all of it but enough to leave me convinced.

Much appreciated.

Cheers

I noticed the first link in my post takes you to a reply page. Here is the correct link without that reply part: https://discussion.fool.com/Post.aspx?mid=34561158. Thanks again to WillO2028 for that post.

And since I am here anyway with something important, I’ll throw in this humorous cautionary note:

“Your writeup makes it clear that this is indeed the case. I didn’t understand all of it but enough to leave me convinced.”

You not understanding all of it just hid my own not understanding all of it ;).

Joking aside I do understand enough of it to feel like my ah-hah moment was genuine. That said, do not miss the very last paragraph of my post. It is a big question. For now I trust the company’s strategic positioning and network to be the differentiating factor and so am not too worried in reality.

1 Like

Wow, really great thread, gearheads Rafes and Smorg! So the FSLY edge hosts have execution secret sauce while allowing developers to be cloud/CDN agnostic in their code. Very cool. A major challenge with writing code to operate in a cloud is making it agnostic while still scaling and performing. Cloud services like Amazon AWS, Google, Azure, etc. try hard to lock you into their platform, and then bleed you with monthly escalating service bills.

FSLY of course will try to bleed you with monthly service bills (good for us investors). But allowing WebAssembly binaries to execute closer to the endpoint app and to be cloud agnostic is a safe business decision for the developer. You can migrate your binaries any time FSLY stops giving you the turbocharged price/performance you expect (maybe good or not good for us investors, but it cuts deployment friction).

To me, a big future TAM driver for FSLY is the IoT world that is now emerging and about to be supercharged by the vast 5G rollout. There does not seem to be enough discussion on this board about its impact on FSLY. IoT devices frequently do not have a lot of CPU/memory resources. Going back to a distant cloud for processing will be a challenge for many IoT or mobile apps. Here again, being agnostic, portable, and scalable on nearby edge hosts with WASM binaries seems really big to me. This is making me more of a FSLY believer. Thx.

-zane

8 Likes