Jan 12, 2022

What is a multi-cloud service mesh, what is the “edge,” and how do they fit together? This article walks through the process of integrating the edge with a service mesh. It provides the high-level details of how to achieve a multi-cloud and edge architecture using StackPath’s Edge platform.
A multi-cloud service mesh is a network architecture that provides inter-application traffic management, security, and observability across multiple cloud providers. In many environments, service meshes span many services and many clouds. The multi-cloud approach lets each workload run on the provider whose features best serve it and its end-users.
A service mesh functions by routing requests between microservices through proxies deployed alongside each service. These proxies, or “sidecars,” form a mesh network that provides traffic management, secure service-to-service communication, and observability.
You can set policies and tweak the settings that define these features from a control-plane user interface (UI). The control plane also distributes configuration updates to the sidecars and collects the metrics that the sidecars report.
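To make the control-plane idea concrete, here is a minimal sketch of a traffic policy as it might look in Istio (one of the meshes discussed below). The `payments` service and its `v1`/`v2` subsets are hypothetical names chosen for illustration; the YAML is generated to a local file so it can be reviewed before being applied to a cluster.

```shell
# Generate a hypothetical Istio VirtualService that splits traffic
# between two versions of a "payments" service (names are illustrative).
cat > payments-routing.yaml <<'EOF'
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: payments
spec:
  hosts:
    - payments            # service name as resolved inside the mesh
  http:
    - route:
        - destination:
            host: payments
            subset: v1
          weight: 90      # keep 90% of traffic on the stable version
        - destination:
            host: payments
            subset: v2
          weight: 10      # canary 10% of traffic to the new version
EOF
# In a live cluster this would be applied with:
#   kubectl apply -f payments-routing.yaml
echo "wrote payments-routing.yaml"
```

The control plane pushes a policy like this to every relevant sidecar, so no service code changes are needed to shift traffic.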
Achieving this level of communication and security between microservices without a service mesh is tedious. Each microservice would need its own coded networking logic, adding development overhead and distracting from business goals. Worse, diagnosing communication failures becomes slow and unreliable because the governing logic is buried inside each service.
Organizations commonly use the open-source service meshes Istio and Linkerd for production workloads. Istio is widely popular, with backing from Google, IBM, and Lyft, and it delivers enterprise distributed applications at scale across Kubernetes, virtual machines (VMs), and container workloads. Linkerd is lightweight, simple, and performant, but it does not include an ingress controller and only supports Kubernetes workloads.
Another competitive multi-platform service mesh is HashiCorp’s Consul. It provides a platform for modern application networking and security with identity-based authorization, L7 traffic management, and service-to-service encryption.
The value of a service mesh increases as application services grow in quantity and complexity. While microservice architectures are the most common fit for a service mesh, monolithic applications can benefit too.
Organizations often run service meshes in single-node, multi-node, multi-cluster, and multi-cloud architectures in public cloud environments. However, it’s best to run some services on edge infrastructure, close to users.
Extending a multi-cloud service mesh to StackPath’s edge architecture provides access to more than fifty points of presence (PoPs) across the globe, including seventeen in the USA. Benefits of StackPath’s reach and proximity to end-users include faster load times, lower latency, finer-grained control, and cost-effectiveness.
Enterprise infrastructure continues to evolve: it began on-premises, moved to the cloud, then shifted to a hybrid of the two. Now it extends to the edge, or a combination of all three. This progression reflects advances in computing and growing demand for connected distributed systems.
According to Gartner’s Predicts 2021: Cloud and Edge Infrastructure report, the pandemic and economic slowdown have served as catalysts for digital innovation and the adoption of cloud services. While public clouds have mainly been adequate for running these workloads, the same report acknowledges “a growing demand to address lower latency, [and to] process [exabytes of] data on the edge.”
Adopting a multi-cloud-edge approach with a service mesh makes business sense: we benefit from the utility of public clouds and from proximity to end-users, which also reduces costs. Even in a multi-cloud-edge architecture, applications in the service mesh remain fully connected, and we can still reference services by name. Deployment to the edge from a public cloud is seamless, and edge applications remain accessible to our cloud-deployed services.
In summary, a multi-cloud and edge service mesh eliminates many of the headaches of traditional multi-environment deployments. Previously, we would have treated each cloud or edge environment separately, then manually managed communication and security between services running in different environments. Now, unified communication is possible across all environments.
StackPath’s VMs enable us to spin up Kubernetes clusters at the edge, adding to an existing service mesh. To begin using the edge in a multi-cloud service mesh, we simply add a StackPath VM to our Kubernetes cluster using the UI or API. This method makes deploying any application to the edge easy, whether a tiny microservice or a more traditional monolithic application.
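As a rough illustration of that step, the sketch below generates a bootstrap script of the kind one might run on a fresh StackPath VM to join it to an existing kubeadm-managed cluster as an edge worker node. This assumes kubeadm-style cluster management; the control-plane host, token, and certificate hash are placeholders, and the node label at the end is an assumed naming convention, not a StackPath-defined one.

```shell
# Write a hypothetical edge-node bootstrap script (values are placeholders).
cat > join-edge-node.sh <<'EOF'
#!/bin/sh
set -e
# Install a container runtime and the Kubernetes node components
apt-get update && apt-get install -y containerd kubelet kubeadm
# Join the mesh's existing cluster; real values come from
# `kubeadm token create --print-join-command` on the control plane
kubeadm join CONTROL_PLANE_HOST:6443 \
  --token REPLACE_ME \
  --discovery-token-ca-cert-hash sha256:REPLACE_ME
EOF
chmod +x join-edge-node.sh
# Once the node has joined, it could be labeled so workloads can be
# pinned to the edge (label name is an assumption):
#   kubectl label node <edge-node-name> edge=true
echo "wrote join-edge-node.sh"
```

After the node joins, the mesh’s sidecar injection applies to edge-scheduled pods the same way it does in the cloud, so edge services appear in the mesh by name like any other service.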
StackPath’s VMs come in Debian, CentOS, and Ubuntu distributions, with 1 to 8 vCPUs, 2 GB to 32 GB of RAM, and onboard disk storage fixed at 25 GB. If we require more space, we can attach extended storage from 1 GB to 1 TB at a mount path of our choosing, e.g., /var/lib/data. Organizations commonly use an AWS S3 bucket for object storage, an example of the edge calling into a cloud on an ad hoc basis.
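For reference, here is a sketch of preparing such an extended volume at /var/lib/data. It assumes the attached volume appears as /dev/vdb, which varies by instance; the commands are written to a script rather than executed, since formatting a disk is destructive.

```shell
# Generate a script that formats and mounts extended storage
# (device name /dev/vdb is an assumption; verify with `lsblk` first).
cat > mount-extended-storage.sh <<'EOF'
#!/bin/sh
set -e
mkfs.ext4 /dev/vdb                  # format the attached volume (destructive!)
mkdir -p /var/lib/data
mount /dev/vdb /var/lib/data
# Persist the mount across reboots
echo '/dev/vdb /var/lib/data ext4 defaults,nofail 0 2' >> /etc/fstab
EOF
chmod +x mount-extended-storage.sh
echo "wrote mount-extended-storage.sh"
```

The `nofail` option keeps the VM bootable even if the volume is later detached.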
Because all services are part of a unified mesh network, we can move them to StackPath as needed to provide maximum performance. For example, we may want most traffic ingress to occur at the edge, with edge-hosted services calling into cloud-hosted services as required.
We have the option of assigning one public IP (or up to five private IPs using our Virtual Private Cloud) per instance, and provisioning is quick compared to some popular public cloud providers. We can configure inbound access by specifying a preferred port or range of ports (TCP/UDP) and a secure shell (SSH) key for login. Selecting the anycast IP option routes each user to the nearest edge location.
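To picture what such an inbound-access configuration contains, the sketch below builds an illustrative JSON payload of the kind a provisioning API might accept. The endpoint and all field names here are hypothetical, not StackPath’s actual API schema; the SSH key and source ranges are placeholders.

```shell
# Hypothetical inbound-access rule payload (field names are illustrative).
cat > inbound-rule.json <<'EOF'
{
  "description": "allow HTTPS from anywhere, SSH from an admin range",
  "rules": [
    { "protocol": "tcp", "port_range": "443", "source": "0.0.0.0/0" },
    { "protocol": "tcp", "port_range": "22",  "source": "203.0.113.0/24" }
  ],
  "ssh_key": "ssh-ed25519 AAAA... admin@example.com",
  "anycast": true
}
EOF
# It might then be submitted with something like:
#   curl -X POST https://api.example.com/v1/instances/<id>/network \
#        -d @inbound-rule.json
echo "wrote inbound-rule.json"
```

Note that 203.0.113.0/24 is a reserved documentation range standing in for a real admin network.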
StackPath’s edge locations sit at major internet exchanges in thirty-five markets around the globe. Many of StackPath’s edge services, VMs, and containers are available across North America, Europe, Asia, South America, and Australia. StackPath also provides an easy-to-use, orchestrated autoscaling feature for VMs and containers, and the cost of a given configuration appears clearly at the bottom of the provisioning page.
StackPath’s EdgeEngine combines a customer portal, comprehensive APIs, and real-time analytics and metrics into a single automated platform. This platform provides complete control over every device in every StackPath edge location worldwide, so whether we prefer to control, automate, and monitor our compute via scripts or the UI, StackPath can accommodate.
With an average latency of only 29 ms, StackPath is roughly three times as fast as the public cloud. App developers use StackPath’s edge services for video streaming, video on demand, video gaming, and advertising because download speeds and cost savings are superior to what a public cloud offers.
The following ventures benefit from using a service-mesh multi-cloud edge architecture.
Ramping up production in factories can lead to lapses in quality, eventually costing a fortune in reputation and damages. Ideally, a manufacturer would leverage the edge for assembly-line detection and analysis before a failure event.
Monitoring systems for smart homes, estates, communities, and hospitals may provide video footage in real-time. The edge can improve retrieval time from storage to enhance the security of property and human life. These functions can provide an advantage when detecting and preventing accidents, disease, and crime.
More legacy banks are opening their APIs to fintech and third-party app developers. You can now perform banking activities ranging from account registration to fund transfers, payments, micro-lending, investments, and trading. Transactions, including cryptocurrency transactions, can occur on handheld and laptop devices backed by cloud services. These services benefit from the speed of edge architecture, leading to greater customer acquisition and retention.
StackPath provides containers that you can spin up for edge ingress. Other fintech-friendly StackPath services include a content delivery network (CDN), which boasts an 80 percent cache hit ratio, serverless scripting with zero warmup time, and a web application firewall (WAF).
The internet was abuzz with Mark Zuckerberg’s announcement of the metaverse and the demo of virtual reality. Apps intending to leverage this technology will show up soon, and these apps will benefit from edge architecture’s performance. StackPath is always looking to support early adopters of innovative and disruptive technologies.
Post-pandemic food security will be essential. Farms can benefit from monitoring systems powered by the edge, enabling the oversight of crop and animal health, estimating the impact of investments, and forecasting.
AI and machine learning (ML) development, and the deployment of the resulting models at the edge, save time and cost by remaining closer to the user.
The most significant benefit of edge services is the ability to increase network performance by reducing latency. Adding this capacity to a multi-cloud service mesh provides robust inter-application traffic management, secure communication within microservices or a monolithic application, savings in bandwidth and operating expenses, brand-defining processing, and faster delivery for the end-user.
We can add StackPath’s edge services, including VMs and containers, to a multi-cloud service mesh to leverage the best of the public cloud and edge for any venture. Supporting services on the StackPath platform such as CDN, API, serverless, and WAF features provide thorough assistance when necessary. To get started, check out StackPath to learn more and take the first steps toward developing and deploying an application at the edge.