Introducing Containers and Virtual Machines at the Edge
Edge computing has arrived at StackPath: Edge Containers and Edge VMs (virtual machines) are now available.
When we launched our serverless scripting product, EdgeEngine, the goal was to allow low-latency execution of complex logic as close to the user as possible. Being able to intercept a request, perform some processing, personalize the response, and return it within milliseconds means faster response times, lower costs, and better user experiences.
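As a concrete illustration, an edge script of this kind typically pairs a request handler with a small personalization function. The sketch below is hypothetical: the header name and the Service Worker-style `addEventListener('fetch', ...)` wiring shown in the comment are assumptions for illustration, not the exact EdgeEngine API.

```javascript
// Hypothetical sketch of an edge personalization script.
// Header names and runtime wiring are illustrative assumptions.

// Pure helper: choose a greeting from a country code.
function personalize(countryCode) {
  const greetings = { DE: 'Hallo', FR: 'Bonjour' };
  return greetings[countryCode] || 'Hello';
}

// Build a personalized response for a request-like object,
// entirely at the edge, with no round trip to an origin server.
function handleRequest(request) {
  const country = request.headers['x-country-code'] || 'US'; // assumed header
  return { status: 200, body: `${personalize(country)} from the edge!` };
}

// In a Service Worker-style serverless runtime this would be registered as:
// addEventListener('fetch', event => event.respondWith(handleRequest(event.request)));
```

Because the whole script runs inside the PoP, the response is assembled and returned without ever leaving the edge location.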
Serverless is a great way to implement certain types of functionality, but what if you also need to perform callbacks to a central API, make database lookups, or process image data through a custom library? These still involve making calls back to your infrastructure, perhaps running in a central public cloud region, which means more latency, higher bandwidth costs, and slower response times.
What if you could host your application in containers and VMs running at the edge, taking advantage of the global StackPath network to move your workloads closer to users?
Well, now you can do just that. Introducing StackPath Edge Computing with Edge Containers and Edge VMs.
Specify your image – we do the rest
StackPath Edge Computing is a fully managed environment that allows you to run your container and VM workloads in any of our global PoPs. With no clusters to manage, you specify the workload requirements: instance CPU/RAM sizing, where to deploy your images, how many instances to run globally, and what disk resources you need. Then specify your image and we do the rest. StackPath ensures that the number of instances you specify per location is always running, manages the reliability and redundancy of the deployment, and handles all security patching. You need only focus on what’s running inside your containers or VMs.
Our web portal makes it easy to manage your workloads, which can be defined graphically or with a YAML configuration file. Our API mirrors the portal functionality, so everything can be automated.
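To make this concrete, a workload definition along these lines might look like the following. This is an illustrative sketch only; the field names are assumptions for the sake of example, not the actual StackPath YAML schema.

```yaml
# Hypothetical workload definition (field names are illustrative,
# not the actual StackPath schema).
name: image-processor
kind: container              # or: virtual-machine
image: registry.example.com/acme/image-processor:1.4
spec:
  cpu: 2                     # vCPUs per instance
  memory: 4Gi
  disk: 20Gi
deployment:
  pops: [ams, sea, sin]      # target PoP locations
  instancesPerPop: 2         # count kept running in each location
```

The same definition could be applied through the portal or submitted via the API as part of an automated pipeline.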
High performance networking
Workloads are deployed on the same platform that all StackPath products run on. With free, low-latency connectivity to other instances and StackPath services within the same PoP, you can build sophisticated request flows.
For example, you can host content on our CDN, intercept requests with EdgeEngine serverless scripts, hand off heavy processing to an image manipulation library running in containers, then return a customized response to the user, all within the same edge location and completed within milliseconds. Eliminate unnecessary network hops back to centralized infrastructure and save on bandwidth costs while improving response times.
Global routing using anycast IPs
Assign anycast IPs to your workloads so user traffic enters the StackPath network at the closest PoP. Workloads can be deployed right in the PoP where traffic enters the network, allowing you to immediately service customer requests. All PoP-to-PoP traffic is routed over the secure StackPath private backbone, avoiding the public internet and taking optimized routes to your workload endpoints.
Diverse PoP locations
Centralized public cloud tends to focus on major regions, using data center campuses outside of major cities to help minimize costs. In contrast, StackPath PoPs are located in premium locations, with a diverse set of regions all over the globe. Where it still makes sense to use public cloud or your own facilities, you can take advantage of StackPath’s premium networking with 65 Tbps of total capacity, 26+ Tier 1 carrier links, and 2,700+ peering partners. Move your latency-sensitive workloads to the StackPath edge while maintaining the benefits of public cloud with optimized interconnect and backhaul to major cloud providers.
Flat pricing, pay as you go
Pricing is the same, globally. Pay only for what you use, by the hour.