
Definition

Serverless is a cloud computing model in which functions and compute services are executed on an as-needed basis.

Overview

Serverless is a misnomer in the sense that cloud service providers still use servers to execute developers' code. The "less" refers to the developer's experience: developers spend less time managing backend infrastructure and more time writing application code. After code is pushed, servers scale on demand and are fully managed by the cloud service provider.

Companies also spend less money on cloud services when they use serverless. Instead of paying a flat rate for virtual machine (VM) instances that are routinely under- or over-utilized, the company pays the cloud service provider only for the time its code actually executes. Usage is typically metered in requests, often billed per million requests. For instance, StackPath charges around $0.60 per million requests for its serverless product.
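To make the billing model concrete, here is a small sketch of per-request pricing. The function and the request volume are illustrative; only the $0.60-per-million figure comes from the example above.

```javascript
// Hypothetical cost model for illustration: pay-per-request serverless
// billing. The request volume (10 million/month) is an invented example.
function serverlessCost(requests, pricePerMillionUsd) {
  return (requests / 1e6) * pricePerMillionUsd;
}

// 10 million requests in a month at $0.60 per million requests:
const monthlyCost = serverlessCost(10e6, 0.6); // $6.00
```

Because billing tracks actual requests, a month with zero traffic costs nothing, which is where the savings over an always-on VM come from.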

For many workloads, serverless is the most scalable and cost-effective method available for cloud computing and edge computing.

How Serverless Works

Developers rely on serverless to execute specific functions. Because of this, cloud service providers offer Functions as a Service (FaaS). Below, you can see how functions are written and executed in a serverless way.

  1. The developer writes a function. This function often serves a specific purpose within the application code.
  2. The developer defines an event. The event is what triggers the cloud service provider to execute the function. A common type of event is an HTTP request.
  3. The event is triggered. If the defined event is an HTTP request, a user triggers the event with a click or similar action.
  4. The function is executed. The cloud service provider checks if an instance of the function is already running. If not, it starts a new one for the function.
  5. The result is sent to the client. The user sees the result of the executed function inside the application.
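The five steps above can be sketched as a single HTTP-triggered function. This is a provider-agnostic illustration; the request and response shapes are assumptions, not any specific platform's API.

```javascript
// A minimal HTTP-triggered serverless function (steps 1-2: the developer
// writes the function and ties it to an HTTP event). The platform routes
// each incoming request to this handler and returns its result.
function handleRequest(request) {
  const name = request.query.name || "world";
  return {
    status: 200,
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ greeting: `Hello, ${name}!` }),
  };
}

// Simulating the event trigger (step 3) and the result sent back
// to the client (step 5):
const response = handleRequest({ query: { name: "Edge" } });
```

Everything outside `handleRequest`, including scaling and instance management (step 4), is the provider's responsibility.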

Cold Starts

The flow above is a very basic example, but it surfaces an issue that affects almost every serverless workload: cold starts.

A cold start occurs when no instance of a function is running at the moment its event is triggered, so the platform must spin one up before the code can execute. The resulting delay is one of the downsides of some serverless products, but you can reduce the impact of cold starts by pushing functions to the edge.
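The mechanics can be seen in a toy simulation: the first invocation pays an initialization cost, while warm invocations reuse the existing instance. The instance lifecycle here is deliberately simplified and does not model any real provider.

```javascript
// Toy cold-start simulation. A real platform would also evict idle
// instances after a timeout; that detail is omitted here.
let instance = null;

function invoke(payload) {
  let coldStart = false;
  if (instance === null) {
    // Cold start: the platform must create a new instance first,
    // which adds latency to this one request.
    instance = { startedAt: Date.now() };
    coldStart = true;
  }
  return { result: payload.toUpperCase(), coldStart };
}

const first = invoke("hello");  // pays the cold-start penalty
const second = invoke("again"); // served by the warm instance
```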

By executing functions on platforms with servers that are closer to end users, you can offset cold starts with low latency. This is similar to how content delivery networks operate where static content is delivered from an edge server rather than an origin server.

In addition to the lower latency achieved through edge delivery, FaaS providers like StackPath also run functions on the Chrome V8 engine. V8 isolates start much faster than the heavier container-based runtimes behind cloud serverless products like AWS Lambda. Combined with edge delivery, this all but eliminates issues related to cold starts.

Serverless functions executed at the network edge are part of a concept called edge serverless.

Examples of Serverless

Single function applications

Chatbots are popular among customer service and sales teams for communicating with customers and prospects when team members aren’t available. They’re also popular for internal communication and updates (e.g. Slack). But even though they provide substantial value, their code is relatively minimal and can often be turned into a single function.

Instead of hosting chatbot logic on VMs that run around the clock, accruing charges even when no functions are executing, chatbot service providers can run that logic on a serverless platform to decrease their operational costs. This way, the company is billed only when it serves a request, not for VMs that sit underutilized.

Companies can also create their own chatbots more easily and save on chatbot services. Developers can quickly create serverless Slack bots and other internal business apps without needing to think about the backend.
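A chatbot's "relatively minimal" logic really can fit in one function. The sketch below loosely follows the shape of a Slack slash-command payload; the field names and command set are illustrative assumptions, not an exact Slack API contract.

```javascript
// Hypothetical single-function chatbot. The incoming body is modeled
// on a slash-command POST; the commands themselves are invented.
function slashCommandHandler(body) {
  const replies = {
    "/status": "All systems operational.",
    "/help": "Available commands: /status, /help",
  };
  const text = replies[body.command] || `Unknown command: ${body.command}`;
  return { response_type: "ephemeral", text };
}

const reply = slashCommandHandler({ command: "/status", user_id: "U123" });
```

Deployed behind an HTTP trigger, this one function is the entire backend: no VM, no process manager, no idle capacity.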

Event-driven pipelines

Developers of more complex applications can use FaaS “to orchestrate actions between different services and try to build in a way that creates event-driven pipelines,” according to Peter Sbarski. In his article about serverless architectures, he shows how FaaS removes the need for manual input from users for a video transcoding application.

With FaaS in place, the video transcoding pipeline looks like this:

  1. The user uploads a video file to the storage service
  2. A transcode job is created with a function (executed w/ FaaS)
  3. The video is transcoded with the transcoder service
  4. The new video is saved to the storage service
  5. Video metadata is updated with a function (executed w/ FaaS)
  6. The metadata is saved to the database service
  7. An email notification is created with a function (executed w/ FaaS)
  8. The email is dispatched through the email sending service

In this example, three separate functions act as "glue" between the other services (steps 2, 5, and 7). This removes the need to host those functions on a server that is always running, and always billed, even when no videos are being uploaded. It also ensures uptime during peak hours without employing an auto-scaling service or paying for overages.
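The three glue functions from the pipeline above (steps 2, 5, and 7) can be sketched as follows. The service clients are stubbed for illustration; in a real deployment each would be the storage, transcoder, database, or email service of your provider, and every name here is an assumption.

```javascript
// Step 2: create a transcode job when a new upload event fires.
function createTranscodeJob(upload, transcoder) {
  return transcoder.submit({ source: upload.key, format: "mp4" });
}

// Step 5: update metadata once the transcoded video is saved.
function updateMetadata(video, db) {
  return db.save({ id: video.id, duration: video.duration, status: "ready" });
}

// Step 7: create the notification email for the email service to send.
function notifyUser(video, mailer) {
  return mailer.send({ to: video.owner, subject: `"${video.title}" is ready` });
}

// Stubbed transcoder service, to show the event-driven flow:
const transcoder = { submit: (job) => ({ jobId: 1, ...job }) };
const job = createTranscodeJob({ key: "uploads/cat.mov" }, transcoder);
```

Each function runs only when its triggering event occurs, which is what makes the pipeline event-driven rather than polled.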

Get Started

These are just a few examples of what’s possible with serverless. To start creating serverless applications for your own needs, you can use the Serverless Framework for organization and a serverless platform like Serverless Scripting for low-latency execution.
