I think it’s important to understand that AWS Lambda is not a service, and not something users pay for on its own. It’s a platform on which anyone can build products. As noted earlier, the platform supports serverless APIs (Application Programming Interfaces), though “serverless” is actually a misnomer because there definitely are servers involved. Think of Lambda as Amazon’s serverless platform on which anyone can deploy applications. It’s really not hard to understand once you cut through the techno-babble.
Imagine running a program on your phone. You start up an app and give it some inputs, and it gives you some results/outputs. You give it some more inputs and it gives you some more output. Then maybe you kill the app or restart the phone, or send the app to the background, where the OS may leave it running or kill it off for you to save battery life.
It’s the same on the cloud. In typical server computing, you start up the application, feed it data, and get results. The application stays running on the server so that you can feed it more data and get results back. Often the second request you make of the app is based on the first request’s results. But the application stays running on the server until you manually stop it, and AWS/Azure/Google Cloud charge you for as long as that server is running and consuming memory, CPU, etc.
With serverless, you don’t start up an application first; you just make an API call and get back results. You don’t see it, but behind the scenes a server instance gets started up, runs your code, produces results, and is then terminated. The good news is that you don’t get charged for idle time, since there is no application that’s continually running. The other good news, for programmers, is that you’re just calling an API. You’re not worrying about starting up the app, keeping it running, knowing when to terminate it, etc. It’s easy to use.
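To make the “just make an API call” model concrete, here’s a minimal sketch of what a Lambda-style function looks like, assuming a hypothetical handler that doubles a number. Lambda calls a function of this shape once per request; the names `handler` and `event["number"]` are illustrative, not a real deployed API:

```python
import json

def handler(event, context):
    # The platform calls a function like this once per request.
    # 'event' carries the caller's input; 'context' carries runtime
    # metadata. No state is guaranteed to survive between calls.
    n = event["number"]
    return {"statusCode": 200, "body": json.dumps({"result": n * 2})}

# Simulating one invocation locally (no real Lambda runtime involved):
response = handler({"number": 21}, None)
print(response["body"])  # → {"result": 42}
```

From the caller’s side, all of this is hidden; you just see the API call and the result.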
The problem is that there’s overhead in making these API calls, since each one starts up an application, and each API call is independent of the others, since each starts up a new server process behind the scenes. For calls that ingest data (like millions of IoT devices sending data into the cloud), this is great. For running “what-ifs” against a large data set, it can be a nightmare.
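That trade-off can be sketched with a toy cost model. The startup and per-call times below are made-up assumptions purely for illustration, not measured Lambda figures:

```python
# Illustrative model: serverless pays the startup cost on every call,
# while a long-running server pays it once. Numbers are assumptions.
COLD_START = 0.5   # seconds to start the application (assumed)
QUERY_TIME = 0.1   # seconds of actual work per call (assumed)

def serverless_total(calls):
    # Every call starts a fresh process behind the scenes.
    return calls * (COLD_START + QUERY_TIME)

def server_total(calls):
    # The application starts once, then stays resident for all calls.
    return COLD_START + calls * QUERY_TIME

print(serverless_total(1), server_total(1))        # one-off call: roughly a wash
print(serverless_total(1000), server_total(1000))  # repeated queries: startup overhead dominates
```

With a single independent call (one IoT reading arriving), the two models cost about the same; across a thousand related queries, the repeated startup is most of the bill.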
It’s really not clear that Elastic supports Lambda at all, which would be significant.
Elastic is not listed on the AWS Lambda support pages that I saw. If you’re using Elastic as a tool to run multiple analysis queries on large data sets, you wouldn’t want to use Lambda, because each query would have to start up an Elasticsearch application on your large data set, process the query, then shut down. It would be more efficient to start the application once, make several calls, then shut down.
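The start-once-versus-start-per-call difference looks roughly like this in code. `SlowStartClient` is a hypothetical stand-in for a heavyweight search client, not a real Elasticsearch API:

```python
class SlowStartClient:
    """Stand-in for a heavyweight client with expensive startup."""
    startups = 0  # counts how many times the startup cost was paid

    def __init__(self):
        # Imagine loading indexes over the large data set here.
        SlowStartClient.startups += 1

    def query(self, q):
        return f"results for {q!r}"

# Serverless style: every call constructs (and discards) a client.
def serverless_style(queries):
    return [SlowStartClient().query(q) for q in queries]

# Long-running style: construct once, reuse it for every query.
def server_style(queries):
    client = SlowStartClient()
    return [client.query(q) for q in queries]

serverless_style(["a", "b", "c"])  # pays startup three times
server_style(["a", "b", "c"])      # pays startup once
print(SlowStartClient.startups)    # → 4
```

Both styles return the same results; the difference is purely in how many times you pay the startup cost.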
But again, it depends on what you’re doing. Small, lightweight, non-compute-intensive functions are better/more easily done via serverless APIs. Large, compute-heavy applications are not. I would not expect to see Elastic on Lambda unless it was to support data ingestion.