Serverless is a method for executing functions and running cloud compute services on an as-needed basis.
Serverless is something of a misnomer: cloud service providers still use servers to execute code for developers. The "less" refers to the developer's side of the work: developers spend less time on backend development and more time writing application code. After code is pushed, the cloud service provider fully manages the servers and scales them on demand.
Companies also spend less money on cloud services when they use serverless. Instead of paying a flat rate for virtual machine (VM) instances that are chronically under- or over-utilized, the company pays the cloud service provider only for the time its code actually executes. Usage is typically metered in requests and billed per million; StackPath, for instance, charges around $0.60 per million requests for its serverless product.
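A back-of-the-envelope calculation makes the pricing difference concrete. The per-million-request rate below is the StackPath figure cited above; the monthly VM price and traffic volume are illustrative assumptions, not quoted prices.

```python
# Rough cost comparison: pay-per-request serverless vs. an always-on VM.
# The $0.60-per-million figure is from the article; the VM price and
# traffic numbers below are illustrative assumptions.

requests_per_month = 10_000_000          # assumed monthly traffic
serverless_rate = 0.60 / 1_000_000       # dollars per request
vm_monthly_cost = 40.00                  # assumed always-on VM price

serverless_cost = requests_per_month * serverless_rate
print(f"Serverless:   ${serverless_cost:.2f}/month")   # $6.00/month
print(f"Always-on VM: ${vm_monthly_cost:.2f}/month")   # $40.00/month
```

At low or bursty traffic the pay-per-request model wins easily; the trade-off narrows as sustained traffic grows.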
For many workloads, serverless is currently the most scalable and cost-effective approach to cloud computing and edge computing.
Developers rely on serverless to execute specific functions. Because of this, cloud service providers offer Functions as a Service (FaaS). Below, you can see how functions are written and executed in a serverless way.
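The exact handler signature varies by provider, but most FaaS platforms follow a similar shape: a single function that receives an event and returns a response. The sketch below uses an AWS-Lambda-style signature; the event fields are illustrative assumptions.

```python
import json

# A minimal serverless function in the common handler style.
# On a FaaS platform, the provider invokes this in response to an
# event (e.g. an HTTP request); there is no server to manage.
def handler(event, context):
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Locally, you can invoke the function directly to test it.
print(handler({"name": "serverless"}, None))
```

Deploying this is typically a matter of pushing the file to the platform and mapping an event source (an HTTP route, a queue, a storage bucket) to the handler.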
This is a very basic use case, but it illustrates an issue that affects almost every serverless deployment: cold starts.
A cold start occurs when no server is already hosting a function at the moment its event is triggered. This is one of the drawbacks of some serverless products, but you can reduce the impact of cold starts by pushing functions to the edge.
By executing functions on platforms with servers that are closer to end users, you can offset cold starts with low latency. This is similar to how content delivery networks operate where static content is delivered from an edge server rather than an origin server.
Serverless functions executed at the network edge are part of a concept called edge serverless.
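Beyond moving functions closer to users, a common code-level mitigation (not specific to any one provider) is to perform expensive setup at module load time rather than inside the handler, so warm invocations reuse it. The setup work below is a stand-in; real functions might load configuration or open database connections here.

```python
import time

# Expensive initialization done once, at module import time.
# Only a cold start pays this cost; a warm container reuses CONFIG
# across invocations. The setup here is an illustrative stand-in.
_start = time.perf_counter()
CONFIG = {"greeting": "hello"}           # pretend this was slow to build
INIT_SECONDS = time.perf_counter() - _start

def handler(event, context):
    # Warm invocations skip the setup above entirely.
    return {"greeting": CONFIG["greeting"], "cold_init_s": INIT_SECONDS}

print(handler({}, None))
```

Edge execution and load-time initialization are complementary: one shortens the network path, the other shrinks the per-container startup cost.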
Chatbots are popular among customer service and sales teams for communicating with customers and prospects when team members aren’t available. They’re also popular for internal communication and updates (e.g. Slack). But even though they provide substantial value, their code is relatively minimal and can often be turned into a single function.
Instead of hosting chatbot logic on VMs that run around the clock, even when no functions are executing, chatbot service providers can run that logic on a serverless platform to decrease their operational costs. The company is billed only when it serves a request, not for VMs that sit underutilized.
Companies can also create their own chatbots more easily and save on chatbot services. Developers can quickly create serverless Slack bots and other internal business apps without needing to think about the backend.
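A Slack slash-command bot really can fit in a single function. The sketch below assumes an HTTP-triggered handler; the form-encoded fields (`user_name`, `text`) follow Slack's slash-command payload, while the handler signature itself is an assumption modeled on common FaaS APIs.

```python
import json
from urllib.parse import parse_qs

# A single-function Slack slash-command bot. Slack POSTs form-encoded
# fields such as "user_name" and "text"; the event/handler shape here
# is an assumption modeled on common HTTP-triggered FaaS platforms.
def handler(event, context):
    params = parse_qs(event.get("body", ""))
    user = params.get("user_name", ["someone"])[0]
    text = params.get("text", [""])[0]
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({
            "response_type": "in_channel",
            "text": f"{user} asked: {text or '(nothing)'}",
        }),
    }

# Simulated slash-command request:
print(handler({"body": "user_name=ana&text=status"}, None))
```

Because the bot only runs when someone invokes the command, it generates no cost while idle.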
Developers of more complex applications can use FaaS “to orchestrate actions between different services and try to build in a way that creates event-driven pipelines,” according to Peter Sbarski. In his article about serverless architectures, he shows how FaaS removes the need for manual input from users for a video transcoding application.
With FaaS, the video transcoding pipeline looks like this:
In this example, three separate functions act as "glue" between other services (steps 2, 5, and 7). This removes the need to host functions on a server that runs, and is paid for, even when no videos are being uploaded. It also ensures uptime during peak hours without employing an auto-scaling service or paying for overages.
These are just a few examples of what’s possible with serverless. To start creating serverless applications for your own needs, you can use the Serverless Framework for organization and a serverless platform like Serverless Scripting for low-latency execution.