The traditional cloud computing model offers on-demand, metered access to computing resources (storage, servers, databases, and applications) for users who do not want to build, buy, or run their own IT infrastructure. Public cloud service providers maintain and run large server farms whose resources are shared among users, with virtualization techniques providing isolation and security for individual user data. Site redundancy across regions allows recovery from outages and disasters, while all the monitoring and management involved in keeping the cloud running stays transparent to cloud users.
While distributed computing spreads computation workload across multiple, interconnected servers, distributed cloud computing generalizes this to the cloud infrastructure itself. A distributed cloud is an execution environment where application components are placed at appropriate geographically-dispersed locations chosen to meet the requirements of the application.
Such requirements include:
- Location: to enable more responsive and performant service delivery for certain types of applications, where latency is critical and bulk data transfer to and from a central cloud is expensive.
- Regulations: which may require that certain data never leave a given country or region, as with data-residency requirements in the EU.
- Security: to ensure that certain data and processes remain within an enterprise’s private cloud or data center, with which a public cloud is integrated.
- Redundancy: beyond what local, regional, or national site redundancy provides, to mitigate large-scale outages that can affect enterprises.
The distributed cloud service provider ensures end-to-end management of the optimal placement of data, computing processes, and network interconnections based on the requirements above, so that the service appears as a single solution from the cloud user's point of view.
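The placement logic implied by these requirements can be sketched as a constraint filter: keep only the regions that satisfy residency and latency targets, then pick the best of them for redundancy. This is a minimal illustration, with invented region names and latency figures; real providers use far richer placement engines.

```python
from dataclasses import dataclass

@dataclass
class Region:
    name: str
    country: str
    latency_ms: float  # measured latency from the application's users

def place(regions, residency_country=None, max_latency_ms=None, min_sites=1):
    """Return regions satisfying the residency and latency requirements."""
    candidates = [
        r for r in regions
        if (residency_country is None or r.country == residency_country)
        and (max_latency_ms is None or r.latency_ms <= max_latency_ms)
    ]
    if len(candidates) < min_sites:
        raise ValueError("not enough regions satisfy the requirements")
    # Prefer the lowest-latency sites; keep min_sites of them for redundancy
    return sorted(candidates, key=lambda r: r.latency_ms)[:min_sites]

regions = [
    Region("eu-frankfurt", "DE", 12.0),
    Region("eu-paris", "FR", 18.0),
    Region("us-east", "US", 95.0),
]
print(place(regions, residency_country="DE"))
```

Requesting two sites under a 50 ms latency cap, for example, would select the Frankfurt and Paris regions while excluding the distant US site.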
A Content Delivery Network (CDN) is one example of a distributed cloud, where storage (e.g., video content) is positioned in geographically diverse regions to reduce the latency of delivery. Enterprises using CDNs to distribute content get the benefits of scaling both storage and performance, which the CDN provider makes transparent.
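The routing idea behind a CDN can be sketched in a few lines: send each request to the lowest-latency cache that holds the content, and fall back to the origin on a miss. The cache map and latency figures below are invented for illustration.

```python
# Hypothetical cache map: region -> set of content IDs cached there,
# plus the latency from one user's location to each region.
caches = {
    "eu-west": {"series-01", "movie-42"},
    "us-east": {"movie-42"},
}
latency_from_user = {"eu-west": 15.0, "us-east": 90.0, "origin": 140.0}

def resolve(content_id, caches, latency):
    """Route a request to the lowest-latency cache holding the content,
    falling back to the origin on a cache miss."""
    hits = [region for region, items in caches.items() if content_id in items]
    if not hits:
        return "origin"
    return min(hits, key=lambda region: latency[region])

print(resolve("movie-42", caches, latency_from_user))   # nearest cache hit
print(resolve("series-99", caches, latency_from_user))  # miss -> origin
```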
Relationship to edge computing
Edge computing is a solution where data is processed as close as possible to where it is generated. Applications that benefit from edge computing are those where low latency and high throughput are critical, or where sending the data back to a distant cloud for processing is too expensive. Edge computing also helps where the transport network is bandwidth-constrained or unreliable, or where the data is too sensitive to send over public networks, even if encrypted.
Therefore, edge computing is not a different computing paradigm but an extension of distributed cloud computing. The two models can be reconciled by considering edge computing resources as a “micro” cloud data center, with the edge storage and computing resources connected to larger cloud data centers for big data analysis and bulk storage.
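The "micro" cloud view can be illustrated with a toy aggregation step: the edge node condenses raw sensor readings into compact per-batch summaries, and only those summaries travel upstream to the larger cloud data center for big data analysis. This is a sketch, not a real edge pipeline.

```python
def edge_summarize(readings, batch=60):
    """Aggregate raw sensor readings at the edge and emit only compact
    per-batch summaries for the central cloud, cutting upstream traffic."""
    summaries = []
    for i in range(0, len(readings), batch):
        window = readings[i:i + batch]
        summaries.append({
            "count": len(window),
            "min": min(window),
            "max": max(window),
            "mean": sum(window) / len(window),
        })
    return summaries

# 600 raw samples collapse into 10 small summaries before leaving the edge
raw = [float(i % 50) for i in range(600)]
print(len(edge_summarize(raw)))  # 10
```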
Please see the Examples section below for some usage scenarios.
How Distributed Cloud Computing Works
The key advantage of cloud services is that users need not maintain and operate their own IT infrastructure; they shift CAPEX to OPEX by purchasing computing and storage on demand in a utility-like model.
With distributed cloud computing, additional features are open for purchase: users can ask that certain data remain within specific regions, or that a certain performance target for latency or throughput be met. These requirements are expressed as Service Level Agreements (SLAs) between the user and the cloud provider.
It is the job of the cloud provider to hide the complexities of how such SLAs are met. This may include building out additional cloud infrastructure in specific regions or partnering with cloud providers already in those regions. Additionally, high-speed data interconnections need to be set up between these geographically-dispersed data centers.
Major cloud providers have their own technology for integrating these dispersed cloud data centers, ensuring the intelligent placement of data, computing, and storage to meet the SLAs, all transparent to cloud service users.
From the user's perspective, distributing cloud compute resources across the world should be straightforward. For example, StackPath lets users deploy compute workloads at the "cloud's edge" to a variety of points of presence (PoPs) around the world, with varying levels of CPU power and numbers of instances.
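As a rough illustration of what such a deployment request carries, the sketch below models a workload spread over several PoPs. This is not the actual StackPath API; the field names and PoP codes are invented to show the shape of the idea.

```python
# Illustrative only: mimics the shape of an edge-deployment request
# (a real provider API differs); PoP codes here are made up.
workload = {
    "name": "image-resizer",
    "image": "registry.example.com/resizer:1.4",
    "deployments": [
        {"pop": "fra", "cpu": 2, "instances": 3},
        {"pop": "sin", "cpu": 4, "instances": 2},
        {"pop": "jfk", "cpu": 2, "instances": 5},
    ],
}

def total_vcpus(workload):
    """Total vCPUs this workload would consume across all PoPs."""
    return sum(d["cpu"] * d["instances"] for d in workload["deployments"])

print(total_vcpus(workload))  # 2*3 + 4*2 + 2*5 = 24
```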
Examples of Distributed Cloud Computing
Some usage scenarios to motivate distributed cloud computing include the following:
Intelligent Transport: Autonomously driven trucks moving in echelon can locally process the data from on-board and road sensors to maintain steady speed and separation from each other and from other vehicles, all while sending traffic and engine data back to a central cloud. Their path to the destination is monitored by a fleet management application in a regional cloud, which analyzes data from multiple vehicles to determine optimal routes and identify vehicles for maintenance.
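The division of labor in this scenario can be sketched with a toy control step that runs entirely on-board, with no round trip to a distant cloud; the gain and target gap below are made-up values, not a real platooning controller.

```python
def adjust_speed(current_speed, gap_m, target_gap_m=20.0, gain=0.5):
    """Edge-local control step: nudge the truck's speed so the measured
    gap to the vehicle ahead converges on the target separation."""
    # Too far behind -> speed up; too close -> slow down (proportional)
    return current_speed + gain * (gap_m - target_gap_m)

print(adjust_speed(80.0, 26.0))  # 83.0: gap too wide, speed up
print(adjust_speed(80.0, 14.0))  # 77.0: gap too narrow, slow down
```

Telemetry (traffic and engine data) can be batched locally and uploaded to the central cloud asynchronously, since it is not latency-critical.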
Intelligent Caching: A major over-the-top video service provider uses a central cloud to transcode and format videos for different device types served over different networks. It caches content in multiple formats in geographically dispersed CDNs. In anticipation of major demand for a newly released series in a given region, it pre-positions that content in caches closest to end users—for example, storage collocated with cable head ends to serve a residential location, or at 5G base stations in dense urban areas for mobile viewing.
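The pre-positioning decision can be sketched as a threshold on forecast regional demand: any region expected to contribute a large enough share of viewing gets the content pushed to its nearest cache ahead of release. The forecast figures and cache names below are invented.

```python
def prepositioning_plan(forecast, caches, threshold=0.2):
    """Pick which caches should pre-load a title, based on each region's
    share of the total expected demand."""
    total = sum(forecast.values())
    return sorted(
        cache
        for region, cache in caches.items()
        if forecast.get(region, 0) / total >= threshold
    )

forecast = {"eu-west": 500_000, "us-east": 400_000, "ap-south": 50_000}
caches = {"eu-west": "cdn-fra", "us-east": "cdn-nyc", "ap-south": "cdn-sin"}
print(prepositioning_plan(forecast, caches))  # ['cdn-fra', 'cdn-nyc']
```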
Key Takeaways
- Distributed cloud computing expands the traditional, large data center-based cloud model to a set of distributed cloud infrastructure components that are geographically dispersed.
- Distributed cloud computing continues to offer on-demand scaling of computing and storage while moving them closer to where they are needed, for improved performance.
- Edge computing is a complementary aspect of distributed cloud computing, and represents the farthest end of a distributed cloud architecture.