Mar 14, 2019

Using Edge Containers and Edge VMs (virtual machines), we can finally put numbers behind the latency-killing options we have long been dreaming of. With Edge Containers and VMs we can bring pretty much anything closer to our users, and one of the most obvious use cases is to bring read-only data right next to them.
Using products such as Redis and features such as replication, it was already relatively easy to distribute data across multiple regions of any cloud provider. StackPath Edge Computing makes it even easier to distribute your data right next door to your users.
In this case, we started by postulating that we could get the data at least 100ms closer to our users without touching our application. For this example, we used a very simple application that lets us set and get data from our Redis instance over HTTP; Webdis is a good choice for this.
So, let’s prepare the container:
https://github.com/stackpath/webdis/blob/master/Dockerfile
The code is very simple. Most of it is based on https://github.com/donnlee/webdis, with a couple of additions of our own (see the Dockerfile linked above).
Next, all we need to do is docker build, tag, and docker push it to the container registry of our choice. Something like:
docker build -t pessoa/webdis:latest .
docker push pessoa/webdis:latest
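If you push to a private registry instead of Docker Hub, the flow is the same with a registry-qualified tag (the registry host below is just a placeholder):

# Hypothetical private registry; replace with your own registry host
docker tag pessoa/webdis:latest registry.example.com/pessoa/webdis:latest
docker push registry.example.com/pessoa/webdis:latest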
We know that by deploying this container to a single cloud provider region, we will see response times grow as we move further away from that region. To run a test that retrieves some data from Redis, we first need to set that data. With webdis, this is just a curl command:
curl http://<host>/SET/myvalue/test
{"SET":[true,"OK"]}
and then
curl http://<host>/GET/myvalue
{"GET":"test"}
These are the values we get from deploying to GKE in South Carolina, USA:
As expected, the lowest latency, 29ms, comes from a monitoring location in Virginia, USA, about 800 km from South Carolina, where the source of the data is. The highest latency comes from mainland China at 480ms, almost as high as Singapore’s 436ms. Keep that last number in mind; we’ll come back to it at the end.
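To get a rough feel for these numbers from any client machine, curl’s built-in timing is enough (again, <host> stands in for the endpoint being tested):

# Print the total request time for a single GET through webdis
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://<host>/GET/myvalue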
This is where the interesting part really starts.
We keep the master of the data on the US east coast and, without touching any of the code, make it faster for users in Singapore, but also in Warsaw, Poland; Los Angeles, USA; Toronto, Canada; Hong Kong, China; Tokyo, Japan; São Paulo, Brazil; and Melbourne, Australia.
Let’s start by taking our previously built container and sending it to run in those locations, plus the US east coast, which will host the master of our data:
Besides the self-explanatory options, one detail of this setup is worth calling out: the container’s root password is supplied as a hash generated with openssl passwd -1 'YourPasswd'. This will prevent anyone from getting it in plain text from the environment itself.

Only a few seconds later, our containers are running worldwide:
We can now invoke the webdis service running on any of the containers:
Amsterdam:
$ curl http://151.139.83.6:7379/GET/myvalue
{"GET":null}
Tokyo:
$ curl http://151.139.176.3:7379/GET/myvalue
{"GET":null}
Note: we haven’t set any data on the myvalue key in this deployment yet, so it comes back null.
Now we are ready to make the US east coast container (Ashburn – 151.139.51.6) the master of the data.
We ssh to it using the password we set earlier and configure all the other Redis servers as its replicas. Note that earlier we only exposed the ssh and webdis service ports to the public Internet. The Redis server port on each container is not exposed and is therefore only reachable within the private address space of our setup, which is why we use the private IPs to configure the replication:
$ ssh root@151.139.51.6
root@151.139.51.6's password:
Last login: Fri Mar 1 01:33:08 2019 from 108.161.176.6
root@webdis-north-america-iad-0:~# for i in 10.128.32.3 10.128.64.2 10.128.96.2 10.128.128.2 10.128.0.3 10.128.112.2 10.128.176.2 10.128.80.2 10.128.144.2; do echo $i; redis-cli -h $i -p 6379 slaveof 10.128.160.2 6379; done
10.128.32.3
OK
10.128.64.2
OK
10.128.96.2
OK
10.128.128.2
OK
10.128.0.3
OK
10.128.112.2
OK
10.128.176.2
OK
10.128.80.2
OK
10.128.144.2
OK
All the other replicas update automatically. Here are Amsterdam and Tokyo, which we accessed before:
Amsterdam:
$ curl http://151.139.83.6:7379/GET/myvalue
{"GET":"test"}
Tokyo:
$ curl http://151.139.176.3:7379/GET/myvalue
{"GET":"test"}
What did we gain? Recall the 436ms at Singapore? Let’s see what it is now:
2ms?! Yes, 2ms!
Let’s look in detail at all the locations we deployed to, with latency measured from nearby monitoring locations to simulate local users:
Amsterdam

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Netherlands: Amsterdam | EDIS GmbH | 187ms | 4ms |
| Netherlands: Amsterdam | StackPath | 185ms | 2ms |
Toronto

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| USA: Chicago | Rackspace | 58ms | 24ms |
Melbourne

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Australia: Sydney | StackPath | 401ms | 29ms |
| Australia: Sydney | Vultr | 399ms | 29ms |
| New Zealand: Auckland | Zappie Host | 478ms | 381ms |
São Paulo

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Brazil: São Paulo | StackPath | 283ms | 2ms |
| Chile: Viña del Mar | EDIS GmbH | 270ms | 119ms |
Hong Kong

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Germany: Frankfurt | EDIS GmbH | 196ms | 41ms |
| Germany: Frankfurt | StackPath | 198ms | 44ms |
| Netherlands: Amsterdam | EDIS GmbH | – | 46ms |
| Netherlands: Amsterdam | StackPath | – | 47ms |
Singapore

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Singapore | StackPath | 436ms | 2ms |
| China: Hangzhou (Shanghai) | China VPS Hosting | – | 365ms |
| Hong Kong | EDIS GmbH | – | 73ms |
Los Angeles

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| USA: North California | Amazon | 141ms | 23ms |
| USA: San Jose | StackPath | 167ms | 32ms |
| USA: Seattle | Vultr | 137ms | 58ms |
Tokyo

| Location | Provider | Avg. Response (Before) | Avg. Response (After) |
|---|---|---|---|
| Japan: Osaka | Azure | 337ms | 20ms |
| Japan: Tokyo | StackPath | 311ms | 3ms |
| Japan: Tokyo | Vultr | 313ms | 3ms |
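One caveat worth keeping in mind: Redis replicas are read-only by default, so with this setup reads can go to the nearest edge container while writes still have to target the Ashburn master. A minimal sketch of that split, reusing the public addresses from this post (the choice of replica is up to the client):

# Reads hit the nearest replica; writes go to the master in Ashburn
REPLICA=151.139.176.3                              # e.g. the Tokyo container
curl "http://$REPLICA:7379/GET/myvalue"            # local, low-latency read
curl "http://151.139.51.6:7379/SET/myvalue/test"   # writes must go to the master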
All of these operations are also available through an API. For example, https://developer.stackpath.com/en/api/workload/#tag/Instance shows how to get the private and public IPs of all the instances.
The data was collected using Monitors deployed worldwide across multiple providers.
We are now able to use more exotic locations and are no longer restricted to the limited set of data center locations offered by typical cloud providers. For example, Warsaw and Los Angeles were chosen from StackPath’s 45 Edge Computing locations.