Edge serverless is an infrastructural evolution of cloud serverless in which the computing resources that execute serverless functions are located geographically closer to the end user (i.e. at the Internet's edge).
To understand edge serverless and why it’s important, it helps to understand serverless computing first.
Serverless computing is something of a misnomer: computing resources are in fact used to execute the functions developers write. The term stuck because it captures the technology's main benefit: developers no longer have to maintain servers or instances. A developer simply adds a function to their cloud provider and, when requested, the function is executed. For this reason, serverless computing is also referred to as Function as a Service (FaaS).
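The contract is simple: the provider invokes your code with the incoming event, and you return a response. Here is a minimal sketch of that shape (the handler name and event fields are hypothetical, not any specific provider's API):

```python
# Generic shape of a serverless function: the provider receives a request,
# passes it to your handler as an event, and sends back whatever you return.
# You never provision or manage the server this runs on.
def handler(event):
    """Hypothetical FaaS entry point: greet a name carried in the event."""
    name = event.get("name", "world")
    return {"status": 200, "body": f"Hello, {name}!"}
```

Deploying this amounts to registering `handler` as the entry point; the platform handles routing, scaling, and teardown.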
Functions that are deployed in a serverless manner support a variety of use cases. Developers can completely break down monolithic apps and microservices into “nanoservices” for more granular code management, or they can use serverless functions to augment an application. For instance, a StackPath customer uses our serverless edge product to improve the delivery speed of a single function. The rest of their application is executed with non-serverless computing.
To know how much of your application—and which parts of your application—can benefit from serverless computing, you must understand the advantages and disadvantages associated with it. It also helps to understand which disadvantages can be offset in an edge environment with edge serverless.
As with all types of computing, serverless computing has several advantages and disadvantages. Below we'll cover the most significant of each.
Cost efficiency. You’re only charged when a function is executed rather than for the entire time an instance is running. For instance, customers who use our serverless edge product are charged as little as $0.60 per million function executions. In comparison, customers who use our single-core edge container product are charged $0.046 per hour—whether requests are hitting the container or not.
If a service for your application received one million requests per month and you were using the single-core edge container, you would pay about $33.12 ($0.046/hr *24 hours * 30 days). If you were using serverless edge you would only pay $0.60.
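Using the rates from the example above, the comparison works out like this (a quick sketch with the article's published prices; real bills include other line items):

```python
# Monthly cost at the example rates: a single-core edge container billed
# per hour vs. serverless billed per execution.
HOURS_PER_MONTH = 24 * 30

container_rate_per_hour = 0.046      # $/hour, billed whether traffic arrives or not
serverless_rate_per_million = 0.60   # $ per million function executions

requests_per_month = 1_000_000

container_cost = container_rate_per_hour * HOURS_PER_MONTH
serverless_cost = (requests_per_month / 1_000_000) * serverless_rate_per_million

print(f"container:  ${container_cost:.2f}")   # container:  $33.12
print(f"serverless: ${serverless_cost:.2f}")  # serverless: $0.60
```

The gap grows with idle time: the container charge accrues around the clock, while the serverless charge scales only with executions.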
Auto-scaling. In the scenario above, you might be concerned about your single-core container running short on resources in the event of a traffic spike. With serverless, you don’t need to worry about resource shortages because scaling happens automatically.
If a traffic spike occurred, more instances would be spun up by the service provider to meet the demand. And when the traffic spike ended, the instances would be erased. With serverless, instances are ephemeral. That's how the provider is able to allocate resources on an as-needed basis and pass the savings on to the customer. There are some caveats to these cost efficiencies, though.
Slow startup times. This is a consequence of the ephemeral nature of serverless setups and is referred to as a cold start. As a Dashbird article about cold starts explains, a serverless instance starting from a cold state must roughly: (1) download the function's code, (2) spin up a new execution environment, (3) initialize the runtime and any bootstrap code, and only then (4) execute the function itself.
Steps 1 through 3 take the most time when your function is in a cold state; when a function is warm, these steps are bypassed and step 4 happens almost instantly. To keep functions warm, you can maintain a pool of pre-warmed functions or use time-series forecasting to drive a "pre-warm strategy" (both outlined in the Dashbird article).
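A pool-based pre-warm strategy can be as simple as a scheduler that periodically sends a lightweight ping to each function endpoint so the provider keeps an execution environment alive. Here is a minimal sketch (the `ping` callable is a stand-in for whatever cheap request your platform supports; this is an illustration, not Dashbird's or any provider's implementation):

```python
import threading

def make_warmer(ping, endpoints, interval_seconds=300):
    """Build a warm-up cycle: ping every endpoint, then reschedule itself.

    `ping` is any callable that fires a cheap request at an endpoint,
    e.g. an HTTP GET with a short timeout (hypothetical; adapt to your platform).
    """
    def warm_cycle():
        for endpoint in endpoints:
            ping(endpoint)
        # Re-arm the timer so functions are touched again before the
        # provider reclaims their warm execution environments.
        timer = threading.Timer(interval_seconds, warm_cycle)
        timer.daemon = True
        timer.start()
    return warm_cycle
```

Calling the returned `warm_cycle` once starts the loop; a forecasting-based strategy would instead vary `interval_seconds` or the endpoint list based on predicted traffic.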
High latency. This is a consequence both of slow startup times and of the latent nature of cloud-based providers, which process requests in centralized data centers built where land is cheap rather than in locations close to the end user. There are a number of optimizations you can make to decrease latency related to slow startups, but only one that addresses location-related latency.
To decrease latency related to slow startup times, the Dashbird article linked above recommends the following: choose a faster runtime, shrink your package size, and monitor your serverless functions. To decrease latency related to compute location, place your serverless functions at the edge with a serverless edge product. There's no other way around it.
Edge serverless works the same way as cloud serverless (outlined above) up until a serverless event is triggered. When an event is triggered on a serverless edge platform like StackPath, intelligent routing sends the request to the nearest point of presence (PoP) in a multi-PoP network, and the function executes there, close to the end user.
By decreasing the distance a request must travel, this setup lowers latency and can even offset the performance loss that's inherent to serverless (i.e. cold starts). Edge platforms like StackPath further decrease latency by placing PoPs in traffic-dense city hubs rather than in rural areas where land is cheap.
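The physics behind that claim is easy to sketch: a signal in fiber travels at roughly two-thirds the speed of light, so every ~100 km of one-way distance adds about 1 ms of round-trip time before any routing or queuing delay. A back-of-the-envelope estimate (idealized numbers, assumed for illustration):

```python
# Lower bound on round-trip time from distance alone.
# Signal speed in fiber is roughly 200,000 km/s (about 2/3 c),
# so ~100 km of one-way distance costs ~1 ms of round trip.
KM_PER_MS_OF_RTT = 100  # idealized; ignores routing, queuing, and cold starts

def min_rtt_ms(one_way_km):
    """Best-case round-trip time in milliseconds for a given one-way distance."""
    return one_way_km / KM_PER_MS_OF_RTT

print(min_rtt_ms(4000))  # distant cloud region: at least 40.0 ms
print(min_rtt_ms(50))    # nearby edge PoP: at least 0.5 ms
```

No software optimization can beat this floor; only moving the compute closer can.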
A great use case for edge serverless is an application with a latency-sensitive microservice. A StackPath customer in one of the most latency-sensitive industries, digital advertising, found a use case for edge serverless while trying to improve the performance of a component within its real-time bidding platform.
One aspect of the platform that was lightweight but extremely sensitive to latency was the cookie syncing process. Also called cookie matching, this is a data swap event between a publisher, advertiser, and any other parties that share user data.
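Mechanically, a sync endpoint receives a redirect carrying the partner's user ID and records it against the site's own ID for that browser. Here is a minimal sketch of that mapping step (the endpoint path, parameter names, and in-memory store are all hypothetical; production systems persist the mapping and answer over HTTP):

```python
from urllib.parse import urlparse, parse_qs

# Hypothetical in-memory store: our user ID -> {partner name: partner's user ID}
id_map = {}

def handle_sync(our_user_id, request_url):
    """Record a partner's user ID against ours from a sync redirect.

    Expects a URL like /sync?partner=dsp1&partner_uid=abc123
    (parameter names are made up for this sketch).
    """
    params = parse_qs(urlparse(request_url).query)
    partner = params["partner"][0]
    partner_uid = params["partner_uid"][0]
    id_map.setdefault(our_user_id, {})[partner] = partner_uid
    return id_map[our_user_id]
```

Because each hop in the swap is a network round trip, running a handler like this at an edge PoP near the user shortens every leg of the sync.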
According to this AdPushup article, the delay caused by cookie syncing can be a few seconds, depending on the number of parties involved in the sync. In advertising, milliseconds matter, and a few seconds is far too long.
To decrease the time it takes to complete the cookie sync, the media company (and StackPath customer) Future PLC moved the sync to serverless scripting at the edge. By switching from a cloud-based serverless product to StackPath's serverless edge product, Future PLC improved the user experience while cutting costs by 30%, proving that faster does not have to mean more expensive.