Jan 14, 2021

For those in the video game industry, a significant amount of potential revenue is up for grabs over the next three years. Analysts estimated that the 2020 cloud gaming market alone would be worth $548.7 million, and they expect it to grow nearly ninefold to $4.8 billion by 2023[1]. Though several factors will decide the winners and losers in this highly competitive battle royale, in today’s marketplace one component above all others will define who wins – the ability to meet or exceed gamers’ expectations.
In this fast-paced industry of more than 2,400 companies in the U.S. alone[2], gamers expect ground-breaking, constant innovation and evolution of experience – a hunger the gaming industry works to feed however it can. Any attempt at a flawless video game experience requires that every component – from development through delivery – work without failure. The industry demands consistent perfection, punishes those who fall short, and executing the lifecycle flawlessly only buys enough credibility to last until the next release cycle starts.
To prevent the myriad issues that can damage a game’s reputation, every game needs a competitive edge. More specifically, as many industry-leading developers are quickly discovering, edge compute – the “compute” part of edge computing – can make a notable difference in reducing lag and latency, improving deployment and download speeds, and more.
We’ll go into these benefits in more detail in a moment, but first, let’s define exactly what we mean when we say “edge compute.”
At StackPath®, edge compute is virtual machines (VMs) and containers that run in our global network of edge locations. These VMs and containers are physically closer to more end-users than are VMs and containers running in centralized, hyper-scale public cloud data centers. Increasingly leveraged by developers from many different sectors, edge compute specifically helps the gaming industry deploy game servers, provision update servers, and build out other computing solutions with as little latency between the infrastructure and gamers as possible.
Edge compute’s use cases and benefits are extensive, but in summary, it helps video games thrive in this hyper-competitive market by letting them scale with sudden demand, stay highly available, and deliver content quickly without runaway transfer costs.
Call it what you will – a traffic spike, a traffic stampede, traffic overload, a popularity surge – a sudden increase in demand for your game has a dark side, most notably in the form of player experience degradation. A game’s inability to scale – however brief – can cripple whatever success it might otherwise have had.
Though there are several ways to manage an influx of demand, such as reducing the number of interacting entities or the amount of communication required to keep the game state in sync, one of the most reliable is to increase the resources devoted to running the application.
To enable this, gaming customers often turn to StackPath’s auto-scaling capability. Our edge compute solutions are designed to automate the provisioning of container and virtual machine instances, quickly maximizing the efficiency of the workload and its servers. To set it up, developers specify the maximum number of instances they want to scale to in the event of a CPU spike.
As the demand for the application increases, so do the compute resources – rapidly and automatically. Edge compute provides a logical scaling approach for handling increasing traffic that can be set up proactively, with the appropriate controls already in place.
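To make that setup concrete, here is a rough Python sketch of a workload definition with a replica ceiling and a CPU target. The field names, image tag, thresholds, and the endpoint mentioned in the comments are illustrative assumptions for this example, not StackPath’s documented API; the real schema lives in the provider’s API reference.

```python
# A minimal sketch of declaring auto-scaling bounds for an edge workload.
# Field names, the image tag, and the thresholds are illustrative assumptions.
import json

workload_spec = {
    "name": "game-server-fleet",
    "spec": {
        "containers": {
            "game-server": {
                "image": "registry.example.com/game-server:1.4.2"  # placeholder image
            }
        },
        "scaleSettings": {
            "minReplicas": 2,    # baseline instances that always run
            "maxReplicas": 20,   # hard ceiling so a spike cannot run up costs
            "metrics": [
                {"metric": "cpu", "averageUtilization": 70}  # add replicas past 70% CPU
            ],
        },
    },
}

# In practice this payload would be submitted to the provider's workload API,
# e.g. requests.post(API_URL, json=workload_spec, headers=auth_headers).
print(json.dumps(workload_spec, indent=2))
```

The important design choice is the explicit ceiling: the workload can absorb a spike without manual intervention, but it never grows past a limit the team has already budgeted for.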
As cloud gaming becomes even more popular, availability will move even further to the forefront as a key performance indicator that all video game developers must watch closely. Downtime is essentially an easy opportunity for your competitors to grab your players’ attention – potentially indefinitely.
One way that StackPath’s gaming customers can improve availability is to use its edge compute resources as additional servers. Beyond minimizing latency, having more locations worldwide to cache your game’s resources and process the workload prevents server overload and minimizes the delayed responses that can look like outages.
If traffic surges in one area, another PoP reasonably close to the gamer’s location can be configured to automatically pick up that workload without delay and prevent availability issues. A typical cloud provider, by comparison, would have to route the request even farther away than the original location, which likely wasn’t close to the gamer to begin with.
To make this happen rapidly, once the necessary compute resource is identified, SP//’s Anycast capability routes traffic to, and allocates resources from, the edge location nearest the user, using the global edge network to full advantage. Beyond the strong availability this enables, edge compute’s persistent storage provides the option to expand the storage allotted to each instance of the workload, both at creation and later on if needed. Even after a VM or container is shut down for any reason, the storage remains available and can be used to reliably perform migrations and maintenance upgrades.
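To make the persistent storage option concrete, here is a small Python sketch of a volume claim that is sized when the workload is created and expanded later. The claim name, sizes, and field layout are assumptions made for illustration rather than a specific provider schema.

```python
# A minimal sketch of persistent storage attached to an edge workload:
# sized at creation, grown later, and outliving any single instance.
# Names, sizes, and field layout are illustrative assumptions.
volume_claim = {
    "name": "match-data",
    "spec": {"resources": {"requests": {"storage": "20Gi"}}},  # size at creation
}

def expand_claim(claim: dict, new_size: str) -> dict:
    """Request a larger allocation for an existing claim.

    Because the volume outlives any single VM or container instance, the
    data it holds survives restarts, migrations, and maintenance upgrades.
    """
    claim["spec"]["resources"]["requests"]["storage"] = new_size
    return claim

# Later, when larger content updates or match histories need more room:
expand_claim(volume_claim, "50Gi")
print(volume_claim)
```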
As an additional failsafe, should a resource become unusable for any reason, edge compute employs health monitoring in the form of liveness and readiness probes. If a server goes down, auto-scaling, as mentioned earlier, allows the application to relaunch other instances in its place, quickly minimizing downtime, possibly even eliminating it, and enabling your game to maintain a high availability rate.
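As a sketch of what those probes can look like, the snippet below pairs a liveness check (replace the instance if the process is unhealthy) with a readiness check (stop routing new sessions until it can accept players). The paths, ports, and timings are assumptions chosen for illustration; the exact probe schema depends on the workload API in use.

```python
# A minimal sketch of liveness and readiness probes for a game-server container.
# Paths, ports, and timings are illustrative assumptions.
health_checks = {
    "livenessProbe": {                 # "is the process still healthy?"
        "httpGet": {"path": "/healthz", "port": 8080},
        "initialDelaySeconds": 15,     # give the server time to boot
        "periodSeconds": 10,           # probe every 10 seconds
        "failureThreshold": 3,         # three misses -> replace the instance
    },
    "readinessProbe": {                # "is it ready to accept players?"
        "httpGet": {"path": "/ready", "port": 8080},
        "periodSeconds": 5,
        "failureThreshold": 2,         # two misses -> stop routing new sessions here
    },
}
```

Paired with auto-scaling, checks like these turn an instance failure into a brief routing event rather than visible downtime.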
As gamers grow increasingly impatient with slow download speeds and developers look to control upfront and ongoing costs, using a CDN to deliver games has become an essential part of the video game delivery process.
However, it’s important for game developers and publishers, and especially those providing services to them, to acknowledge that as the size of their data grows, so will the egress costs of retrieving that information from certain cloud services. As recently as 2019, even NASA failed to properly model the impact of data egress charges after signing up with AWS and migrating its records to the Amazon cloud.
To properly support the gaming industry’s robust data transfer needs, StackPath customers who use both its edge services, specifically the CDN or WAF, and its edge compute, specifically virtual machines or containers, are not charged for egress or ingress traffic from one StackPath instance or service to another. This pricing applies even if the services are not accessed from the same PoP location on SP//’s network. With this arrangement, many companies save thousands of dollars a year – protecting their revenue margins even when demand skyrockets unpredictably.
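A quick back-of-the-envelope comparison shows why this matters. The per-gigabyte rate below is a commonly quoted public-cloud internet-egress list price, used here purely as an illustrative assumption, and the traffic volume is hypothetical; actual rates vary by provider, region, and volume.

```python
# Back-of-the-envelope egress comparison. The $0.09/GB rate and the traffic
# volume are illustrative assumptions, not quoted prices.
def monthly_egress_cost(gb_transferred: float, rate_per_gb: float) -> float:
    """Return the monthly cost of moving gb_transferred gigabytes at rate_per_gb."""
    return gb_transferred * rate_per_gb

origin_pull_gb = 50_000  # e.g. CDN nodes pulling builds and updates from origin

metered = monthly_egress_cost(origin_pull_gb, 0.09)        # metered cloud egress
intra_network = monthly_egress_cost(origin_pull_gb, 0.00)  # free between services

print(f"Metered egress:       ${metered:,.2f} per month")
print(f"Intra-network egress: ${intra_network:,.2f} per month")
```

At the assumed rate, that difference is $4,500 a month before any traffic spike, which is why keeping origin pulls inside a single network protects margins as a game grows.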
Perfect execution of the full game lifecycle – as lofty a goal as it is – can be aided by finding ways (and yes, shamelessly, vendors) that let you put select corners of the lifecycle on a trusted autopilot. This approach preserves focus, energy, time, and dollars for game developers to do what they do best – develop killer games.
The examples above are just a handful of ways edge compute can help the gaming industry continue to push the limits of what’s possible. In reality, the applications are practically endless because of edge compute’s inherent flexibility and power. StackPath’s virtual machines, containers, and serverless offerings are ideal for games that require low latency, high uptime, consistency, and global scale.
Want to learn more about edge compute for video game development?
Visit SP// for Gaming
Related resources:
Why Gaming Companies Use Content Delivery Networks
How the Gaming Industry Can Use DevOps and Edge Computing
6 Technical Obstacles That Drain Video Game Revenue and Profits
[1] Fernandes, G. (2020, September 30). Half a Billion Dollars in 2020: The Cloud Gaming Market Evolves as Consumer Engagement & Spending Soar. Retrieved October 20, 2020.
[2] Takahashi, D. (2017, February 14). The U.S. Game Industry Has 2,457 Companies Supporting 220,000 Jobs. VentureBeat.