Mar 25, 2021

Speed is everything on the Internet. Studies repeatedly show that a website, online application, or game's load speed has a dramatic impact on end-user satisfaction, sales, and usage. Slow loading directly leads to fewer page views, lower engagement, and higher abandonment.
The typical solution for accelerating a site or application is to leverage a content delivery network. But while CDNs — like the StackPath CDN — can offer a great deal of configurability, some workloads need something very specific and unique (if not total power and control).
Some workloads need a private CDN.
Building a CDN yourself isn't that simple. (Trust us. We've built a few.) You could rent servers in data centers close to your clients, but this quickly becomes difficult to manage. Or you could disperse your application across multiple cloud regions, but then you're limited to the data center locations your cloud provider offers. And even if you do set up application servers around the world, how do you geographically locate a user and automatically serve them content from your closest server?
Fortunately, our edge platform has all of the parts you need: virtual machines located close to end users (as opposed to public cloud providers, whose servers live in more centralized data centers); edge computing locations around the world, so you can always serve content to end users from nearby; and a private network backbone connecting them, to avoid the turbulence and bottlenecks of public internet routes.
In this article, we walk through creating a three-node, multi-continent CDN on StackPath edge compute VMs. These VMs will serve files from persistent storage with the NGINX web server on an anycast IP address.
If you expect your needs would be better served by a configurable HTTP cache like Varnish, you could easily install that instead. We’re using NGINX in this guide because it’s an extremely fast general purpose web server that can handle nearly any task you might want your CDN to perform.
Before you begin creating the VM fleet, you should have the following:
First, sign up for a StackPath account. The first step of the signup process asks which service you want to use. Select the Edge Compute service, then finish signing up.
When you complete the signup process, you will be logged into your StackPath control panel on the Create Workload page. Workload is the StackPath term for a VM (or container) deployment that uses the same operating system (OS) image and is managed and billed as a single unit.
Fill in the Create Workload page as follows:
Click Continue to Settings to proceed to the Workload Settings page.
Complete the form as follows:
The VPC is a virtual private network that StackPath creates between your VMs. Each VM is allocated a private IP address over which they can securely communicate.
Click Continue to Spec to move to the Spec, Storage, & Deployment page.
Click Create Workload to spin up the VMs.
This will take a couple of minutes. When the VMs are online, you will see them on the Workloads Overview page along with their public and private IP addresses.
You will also find the anycast IP address on this page. Since you will need it when you start configuring NGINX, point your domain name at the anycast IP address now so it's ready.
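For example, if your CDN hostname were cdn.example.com and the anycast address were 203.0.113.10 (both placeholders; use your own domain and the address shown in the control panel), the record would be a simple A record:

```
; Hypothetical zone file entry — replace the name and address with your own.
cdn.example.com.    300    IN    A    203.0.113.10
```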
Repeat the following install and configuration steps on all your VMs. Note that if you plan to create a production-ready custom CDN, you should consider using provisioning and configuration management tools like Terraform and Ansible to create and configure your VMs.
First, log in to your VM using SSH. The default usernames are as follows:
| Distribution | User |
| --- | --- |
| Ubuntu | ubuntu |
| Debian | debian |
| CentOS | centos |
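For example, to log in to an Ubuntu VM (the IP address below is a placeholder; use the public IP from the Workloads Overview page):

```shell
ssh ubuntu@203.0.113.10
```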
Next, perform a system update and reboot to ensure your VM is running the latest packages:
Debian and Ubuntu:
sudo apt update
sudo apt upgrade
sudo systemctl reboot
CentOS:
sudo dnf update
sudo systemctl reboot
Log back into the server when the update and reboot are complete. Then, install the NGINX web server as follows:
Debian and Ubuntu:
sudo apt install nginx
CentOS:
On CentOS we’ll install both NGINX and the nano text editor and then start the NGINX service:
sudo dnf install nginx nano
sudo systemctl enable --now nginx
Before you configure NGINX, create a folder under the persistent storage location to contain the files you want to serve:
sudo mkdir /var/lib/data/http/
Next, create a simple HTML file to identify each server by its location. First, open the file in a text editor:
sudo nano /var/lib/data/http/index.html
Then, paste in the following:
Tokyo
Replace the city name with the VM location. This will help identify your servers when you test them.
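The file can be a bare city name, as above, or a minimal HTML page; for example (purely illustrative):

```html
<!DOCTYPE html>
<html>
  <head><title>CDN node</title></head>
  <body><h1>Tokyo</h1></body>
</html>
```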
Next, set the ownership of the new directory and file so that NGINX can read them with the following commands:
Debian and Ubuntu:
sudo chown -R www-data:www-data /var/lib/data/http
CentOS:
sudo chown -R nginx:nginx /var/lib/data/http
Next, create the NGINX configuration that serves your files from /var/lib/data/http.
Debian and Ubuntu:
On Debian and Ubuntu, first remove the default configuration file:
sudo rm /etc/nginx/sites-enabled/default
Then, create a new configuration file with nano:
sudo nano /etc/nginx/sites-available/anycast.conf
Copy and paste the following into the new file:
server {
    listen 80;
    # _ is NGINX's catch-all name; replace it with your anycast domain name if you prefer.
    server_name _;
    root /var/lib/data/http;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
Next, create a symlink from /etc/nginx/sites-available/anycast.conf to /etc/nginx/sites-enabled/anycast.conf so NGINX loads the new site configuration:
sudo ln -s /etc/nginx/sites-available/anycast.conf /etc/nginx/sites-enabled/anycast.conf
Finally, reload NGINX:
sudo systemctl reload nginx
CentOS:
For CentOS, open a new site configuration file at /etc/nginx/conf.d/anycast.conf:
sudo nano /etc/nginx/conf.d/anycast.conf
Then, copy and paste the following:
server {
    listen 80;
    # _ is NGINX's catch-all name; replace it with your anycast domain name if you prefer.
    server_name _;
    root /var/lib/data/http;
    index index.html;

    location / {
        try_files $uri $uri/ =404;
    }
}
Finally, restart NGINX:
sudo systemctl restart nginx
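On either distribution, you can confirm the configuration parses cleanly with NGINX's built-in syntax check before reloading or restarting:

```shell
sudo nginx -t
```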
Your VMs are now configured and ready to test.
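One optional refinement before testing: since these VMs act as CDN edge nodes, you may want NGINX to emit caching headers so browsers and downstream caches can reuse responses. A sketch of how the location block could be extended (the one-hour lifetime is an arbitrary example, not a recommendation):

```
location / {
    try_files $uri $uri/ =404;
    # Ask clients and downstream caches to reuse responses for an hour.
    expires 1h;
}
```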
You should confirm the CDN cluster is working as expected.
First, copy each VM's public IP address from the StackPath Workloads Overview page and paste it into your browser. Each should load the HTML file you created containing that server's city name.
Next, browse to the domain name that you resolved to the anycast IP address. This will connect to the server closest to you.
Testing that the anycast domain name works from other physical locations is a little more challenging. If you have access to a VPN, set your exit location close to one of your other PoPs and test again. Alternatively, use a website like GeoPeeker that displays your site as it appears from several locations around the world.
You have now created a three-node cluster of VMs that takes advantage of StackPath's edge computing platform to serve your assets from the PoP closest to your clients. Your new VMs are fully capable Linux servers, which means you can also use their locations for other latency-sensitive applications, such as video conferencing. To do this, install a conferencing server on the node closest to your office and access it via that VM's public (rather than the anycast) IP address.
From here, your next step may be to get an SSL/TLS certificate for your anycast domain name. If so, the regular HTTP-based validation method will not work, as you cannot guarantee which of your servers the validation requests will reach. Instead, use the DNS-based validation offered by most certificate authorities.
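For example, with Let's Encrypt's certbot client (assuming it is installed; the domain below is a placeholder), you can request a certificate using the DNS challenge:

```shell
sudo certbot certonly --manual --preferred-challenges dns -d cdn.example.com
```

certbot will print a TXT record to add to your DNS zone, then issue the certificate once the record is visible.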
To learn more about deploying flexible, low latency workloads closer to your users than ever before, request a free demo from a StackPath edge expert.