Aug 15 2023
Jun 28 2023
A virtual machine (VM) is a software-based computer that runs on physical hardware in the cloud or on-site. Edge computing is a way of running services and computation closer to the user, and it opens the door to a new form of VM known as an edge VM. An edge VM is comparable to a traditional VM but differs in that it's situated closer to users, which improves performance and reduces latency. Edge VMs are most useful for companies whose users are located far from any centralized data center.
In this tutorial, our goal is to create a Kubernetes cluster whose master node lives at any cloud provider, with worker nodes running at the edge. We'll use StackPath's edge computing platform to set up three edge VMs, then extend our Kubernetes cluster to the edge using KubeEdge.
Before taking a deep dive into StackPath's edge VMs, we should understand why we'd want to run Kubernetes at the edge. Kubernetes is an excellent tool for managing large containerized workloads at scale, with the benefits of automation, scalability, and portability. By extending the cluster to the edge locations closest to our user pool, we can also improve performance and user experience.
We can run the entire solution on edge VMs or move only some of the workloads to the edge. For example, if you're an Asia-based streaming company but most of your users are in North America, latency is likely high simply because of the distance. You might want to move your headquarters and data center to your users' region but find you're unable to because of government policies. For scenarios such as this, it's best to serve streaming video to end users from the nearest edge location.
Now, let's jump into setting up KubeEdge on StackPath VMs. The prerequisites for this tutorial are:
The first step is to create a StackPath account. Once your account is set up, click the Edge Compute tab on the left sidebar. Then click Create Workload.
On the next screen, give the workload a relevant name and select VM as the Workload Type. Please note that KubeEdge currently supports only Ubuntu and CentOS, so choose one of those images. Then click Continue to Settings.
The next step is to set up the public ports and enter the first-boot SSH keys. You'll use these keys later to securely log on to the VM. Head over to this page for a detailed walkthrough of generating and setting SSH keys.
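As a concrete example, you can generate a key pair locally with ssh-keygen. The file name and comment below are arbitrary choices; the public half is what you paste into StackPath's first-boot SSH keys field:

```shell
# Generate an ed25519 key pair (file name and comment are arbitrary examples)
ssh-keygen -t ed25519 -f ~/.ssh/stackpath_edge -N "" -C "edge-vm-access"

# Print the public key; this is what goes into the StackPath console
cat ~/.ssh/stackpath_edge.pub
```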
In the next section, we define the hardware specification and deployment targets. Kubernetes requires at least 2 GB of RAM and two or more vCPUs per node. For this tutorial, we create three deployment targets. These targets will run as worker nodes in our Kubernetes cluster, hosted near your users' location to improve performance.
Clicking Create Workload starts the servers at your chosen locations. StackPath is swift; provisioning takes around 30 seconds to 1 minute.
For the following steps, get your multi-terminal screens ready. We'll install the KubeEdge binaries on each VM that we have up and running. Note that the master node must have Kubernetes installed before KubeEdge can be initiated, and again, KubeEdge currently supports only Ubuntu and CentOS. You can check Kubernetes compatibility with KubeEdge to find the version you need.
There are currently two ways to get the keadm (KubeEdge) binary: download a prebuilt release from GitHub, or build it from source.
For this tutorial, we’ll be building the keadm binaries from the source code. Open the terminal and execute:
git clone https://github.com/kubeedge/kubeedge.git $GOPATH/src/github.com/kubeedge/kubeedge
cd $GOPATH/src/github.com/kubeedge/kubeedge
make all WHAT=keadm
There's a good chance you'll see a "command not found" error, along with some suggested actions. If that's the case, install the required package by running the advised command. The build can take roughly two to six minutes to complete, depending on the specifications of your VMs.
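To avoid hitting those errors one at a time, a quick sketch like the following reports which tools the build generally needs are already installed. The command list here is an assumption based on typical Go project builds; adjust it for your setup:

```shell
# Report which common build prerequisites are already on PATH
for cmd in git make go gcc; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "$cmd: found"
  else
    echo "$cmd: MISSING - install it before building"
  fi
done
```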
When the build completes, navigate to the output folder to try out keadm. If you've followed the exact steps above, the KubeEdge binary is available at:
$GOPATH/src/github.com/kubeedge/kubeedge/_output/local/bin/
To verify that the binary installed properly, navigate to that path by executing this command:
cd $GOPATH/src/github.com/kubeedge/kubeedge/_output/local/bin/
Then run ./keadm to see the output as described in the image below.
Now it is time to start running KubeEdge on your Kubernetes cluster. Open the terminal attached to the master node of your Kubernetes cluster and navigate to the KubeEdge installation path mentioned above.
The keadm init command takes one mandatory argument, --kube-config, which is the path to your Kubernetes configuration file. It also accepts an optional argument, --advertise-address, which is the IP address your edge worker nodes will connect to. If you don't provide the advertise address, it defaults to the local IP address. You can initialize the master node's cloud component by running:
./keadm init --kube-config=YOUR_KUBERNETES_CONFIG_PATH
This command installs cloudcore and other important binaries on the master node. If it shows output like the image below, cloudcore is running. We must perform only one more step before configuring the edge nodes.
As with kubeadm, we can get a token that edge nodes use to connect to the master node. Copy the token you get from executing the following command:
./keadm gettoken --kube-config=YOUR_KUBERNETES_CONFIG_PATH
On the edge side, we need to execute a single command to join the cluster. On the master side, ports 10000 and 10002 must be open, as the edge nodes need to access them. The keadm join command takes one mandatory argument: --cloudcore-ipport. The join command also installs the dependencies required to run on the edge.
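Before running the join, it's worth confirming from the edge VM that the master's cloudcore ports are actually reachable. This sketch uses bash's /dev/tcp pseudo-device; the MASTER_IP value is a placeholder for your master node's public IP:

```shell
MASTER_IP="203.0.113.10"   # placeholder: replace with your master node's public IP

# Probe each cloudcore port with a short TCP connect attempt
for port in 10000 10002; do
  if timeout 3 bash -c "echo > /dev/tcp/${MASTER_IP}/${port}" 2>/dev/null; then
    echo "port ${port}: reachable"
  else
    echo "port ${port}: blocked or closed"
  fi
done
```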
sudo ./keadm join --cloudcore-ipport=EXPOSED_IP --token=TOKEN
If the output says, “KubeEdge edgecore is running,” we have successfully configured the edge component of our Kubernetes cluster running on an edge VM.
To verify if the edge nodes joined the master successfully, head back to the terminal of the master node and run:
kubectl get nodes
For this tutorial, we used three StackPath VMs as edge worker nodes and one VM as a master node, which can be from any cloud provider.
Now you can run any deployments and services on your Kubernetes cluster, whose worker nodes are situated at the edge location nearest to your user pool. This can significantly improve performance.
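For instance, KubeEdge typically labels joined edge nodes with node-role.kubernetes.io/edge (you can confirm with kubectl get nodes --show-labels). Assuming that label is present, a sketch like the following pins a deployment onto the edge nodes only; the deployment name and image are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: edge-nginx            # illustrative name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: edge-nginx
  template:
    metadata:
      labels:
        app: edge-nginx
    spec:
      nodeSelector:
        node-role.kubernetes.io/edge: ""   # schedule only onto edge worker nodes
      containers:
        - name: nginx
          image: nginx:stable
          ports:
            - containerPort: 80
```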
In this tutorial, we learned how to create edge VMs closest to the user using StackPath. After creating the VMs, we installed the KubeEdge binaries on both master and worker nodes. Two notes to remember: make sure you can use SSH to log on to your master node, and keep in mind that the nodes must be running either Ubuntu or CentOS. After installing KubeEdge, we configured the master component to start cloudcore and generate a token. Then, using that token and the exposed IP address, we joined the edge nodes to the master node.
Ready to try using StackPath to create your own edge VMs and containers? Request a demo or create an account today.