Definition
Serverless architecture is a software design pattern in which long-lived server processes are replaced with a combination of third-party cloud services and event-driven ephemeral functions.
Overview
As technology evolves and market demand for applications soars, software development patterns are changing to keep pace. Over the last few years, innovations such as virtual machines and containers have allowed developers to migrate more of their operations to the cloud. The latest innovation is serverless computing, where cloud vendors are, at least in principle, in charge of all operations.
Serverless architecture is a software design model that offloads the cost and complexity of server management to cloud vendors. It encourages the use of third-party services and serverless functions to save on operational costs and increase the productivity of developers.
How Serverless Architecture Works
Serverless is an abstraction in the same manner as “the cloud,” meaning that it hides operational complexity. Under the hood, vendors are certainly using servers. But from the developer’s point of view, there are no servers to manage.
Serverless architecture design encourages the adoption of two areas of cloud computing:
- Backend as a Service (BaaS): a service model with offerings that range from essential components such as storage, databases, push notifications, caching, and authentication; to more advanced services like machine learning, big data, and blockchain. Cloud vendors are continually introducing new services for developers to include in their applications.
- Functions as a Service (FaaS): a service model, also known as serverless functions, that allows developers to run code directly in the cloud without building packages or maintaining any infrastructure.
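As a minimal sketch of the FaaS model, a serverless function is just a handler that the platform invokes once per event. The exact entry point and event shape vary by vendor; the signature below is an illustrative assumption, not any particular platform's API:

```javascript
// Minimal FaaS-style handler sketch. The platform invokes a function
// like this once per incoming event and may tear the instance down
// afterward; the event/response shape used here is an illustrative
// assumption (each vendor defines its own).
async function handler(event) {
  const name = (event && event.name) || "world";
  return {
    statusCode: 200,
    body: `Hello, ${name}!`,
  };
}
```

There is no process to keep alive and no server to configure: deploying a handler like this is the entire operational footprint of the function.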
Benefits of serverless architecture include:
- Cost reduction: serverless applications tend to cost less to run since there aren’t any always-running processes and pricing is based on usage.
- Scalability: serverless applications scale naturally. The cloud vendor provisions resources as needed, so there is little risk of under- or over-provisioning. Compared to auto-scaling server groups or container orchestration (e.g., Kubernetes), serverless scalability is more fine-grained and can better handle unexpected traffic spikes.
- Higher productivity: serverless applications are easy to prototype thanks to the abundance of ready-to-use third-party services. Furthermore, there is no build or package step needed to make a release, which shortens the path from a working prototype to a production-ready application.
- Smaller development teams: serverless applications are made of loosely coupled components. There are more components to manage overall, but since there are fewer hard dependencies between them, serverless architecture is conducive to breaking large developer teams into multiple smaller, more focused groups.
- Flexibility: compared to long-running server applications, serverless is more flexible, which makes it easier to adapt to change or to experiment. As a result, serverless applications can ship features faster.
Trade-offs of serverless architecture include:
- Cost forecasting: serverless applications tend to cost less, but higher-than-usual traffic can increase costs unexpectedly. Because pricing is usage-based rather than flat-rate, it’s important to monitor usage closely.
- Latency: serverless functions can suffer from the “cold start” problem, which occurs when cloud vendors deallocate resources after a period of inactivity. Infrequently used functions can have longer-than-usual startup times. Developers can mitigate this by calling the functions periodically to keep resources allocated and/or by using a serverless edge product.
- Resource limits: serverless functions have limits on memory, CPU, and execution duration. Consequently, a serverless architecture may not be a good fit for high-performance computing.
- Observability: debugging, monitoring, and logging are difficult with serverless applications because functions are ephemeral. Observability is expected to improve as serverless tooling matures.
- Lock-in: each cloud vendor has its own way of delivering serverless, so migrating between clouds can be challenging. Some frameworks, such as the Serverless Framework, alleviate this problem with a plugin architecture that supports many clouds.
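The keep-warm mitigation mentioned under Latency can be sketched as follows. The `warmup` flag is a hypothetical convention between the scheduler and the function, not a platform feature; in practice the periodic ping would come from a cron-style scheduled trigger:

```javascript
// Keep-warm sketch: a scheduler periodically sends an event carrying a
// hypothetical "warmup" flag. The handler answers the ping immediately,
// keeping the instance allocated without running any real work.
async function handleEvent(event) {
  if (event && event.warmup) {
    // Short-circuit: acknowledge the ping and stay warm.
    return { statusCode: 204, body: "" };
  }
  // Normal request handling goes here.
  return { statusCode: 200, body: "handled: " + event.payload };
}
```

Because the warm-up branch returns before any real work, the pings cost close to nothing while keeping the instance's resources allocated.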
Example of Serverless Architecture
Imagine that you’re building a text analysis application. The non-serverless approach is to build a single package with everything needed to do the job. The serverless approach, on the other hand, is to break it down into multiple serverless functions.
For example, take the following function to count words:
addEventListener("fetch", event => {
  event.respondWith(handleRequest(event.request));
});

// Count the words in a string by splitting on runs of whitespace.
function countWords(str) {
  return str.trim().split(/\s+/).length;
}

async function handleRequest(request) {
  try {
    // The request body contains the URL of the page to analyze.
    const url = await request.text();
    const response = await fetch(url);
    const text = await response.text();
    const count = countWords(text);
    return new Response(count.toString());
  } catch (e) {
    return new Response(e.stack || e, {
      status: 500
    });
  }
}
Note: You can try this function in StackPath’s Serverless Sandbox. Copy the code, switch the right panel to Raw mode, and change the method to POST. In the Body, type the URL to analyze. (Text websites work best.)
Once the function is deployed, it’s just a matter of calling it from a browser or mobile device:
fetch('http://your_app_url.com/api/word/count', {
  method: 'POST',
  headers: { "Content-Type": "text/plain" },
  body: "http://$URL_TO_ANALYZE"
})
  .then((res) => {
    do_something(res);
  });
Adding more functionality is easy: simply create another function. For example, you could extend the counting idea to rank words and get a word-usage analysis:
// Build a word-to-count map from a string.
function getFrequency(str) {
  let frequency = {};
  const words = str.trim().split(/\s+/);
  for (let i = 0; i < words.length; i++) {
    if (words[i] in frequency) {
      frequency[words[i]]++;
    } else {
      frequency[words[i]] = 1;
    }
  }
  return frequency;
}
// Turn the frequency map into [word, count] pairs, most frequent first.
function sortByCount(words) {
  let sorted = [];
  for (let word in words) {
    sorted.push([word, words[word]]);
  }
  sorted.sort(function(a, b) { return b[1] - a[1]; });
  return sorted;
}
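Taken together, the two functions turn raw text into a ranked word list. A small usage sketch, with the functions repeated so it runs standalone (the sample sentence is only for illustration):

```javascript
// Ranked word-usage sketch combining getFrequency and sortByCount
// (repeated here so the sketch is self-contained).
function getFrequency(str) {
  const frequency = {};
  const words = str.trim().split(/\s+/);
  for (let i = 0; i < words.length; i++) {
    frequency[words[i]] = (frequency[words[i]] || 0) + 1;
  }
  return frequency;
}

function sortByCount(words) {
  const sorted = [];
  for (const word in words) {
    sorted.push([word, words[word]]);
  }
  sorted.sort((a, b) => b[1] - a[1]);
  return sorted;
}

const ranked = sortByCount(getFrequency("the quick fox and the lazy dog and the cat"));
console.log(ranked[0]); // → [ 'the', 3 ]
```

Wrapped in a fetch handler like the word-count example above, this becomes a second, independently deployable serverless function.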
Since pricing is based on usage, you can cut costs by caching previous word counts. You could, for instance, provision a key-value service to act as a cache. With it, clients can try the cache first before calling the functions.
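A minimal sketch of that cache-first flow, using an in-memory Map to stand in for the key-value service (a real deployment would use the vendor's key-value product, and a client would skip the function call entirely on a hit):

```javascript
// Cache-first sketch: consult a key-value store before invoking the
// (billable) word-count function. A Map stands in for the real
// key-value service; countWords mirrors the function defined earlier.
const cache = new Map();

function countWords(str) {
  return str.trim().split(/\s+/).length;
}

function cachedCount(url, text) {
  if (cache.has(url)) {
    return cache.get(url); // hit: no function invocation needed
  }
  const count = countWords(text); // miss: do the work once...
  cache.set(url, count);          // ...and remember it for next time
  return count;
}
```

The design choice here is that the cache key is the URL, so repeated requests for the same page are answered without paying for another function invocation.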
As you can see, the serverless approach makes it possible to go from an idea to a prototype and from a prototype to a production-grade application with little effort. New features can be experimented with quickly and applications can grow more organically.
Key Takeaways
- The serverless model is one of the easiest ways to build applications that scale on demand.
- Serverless applications typically cost less to run and are quicker to develop.
- Traffic spikes are not a problem but you should closely monitor usage.
- Low-latency and high-performance requirements are difficult to fulfill with a serverless architecture, but latency can be mitigated with a serverless edge product.