How StackPath Was Founded and Built
Last year was a big year for StackPath. Following the release of serverless scripting at the Edge, we released edge containers and VMs to support applications with low-latency, high-bandwidth compute requirements for their end users.
And we’re going to surpass the high bar we set last year. Not only are we speeding up development in 2020, but it’s only the beginning of the second quarter and we’ve already closed our Series B funding through strategic partnerships with Juniper and Cox, announced a collaboration with Broadcom, and will soon announce even more strategic partnerships with companies that want to deliver their services on our distributed edge platform.
Leading up to this point, one comment I’ve heard repeatedly from companies as CTO is: “I don’t know of StackPath.”
True, we are less than five years old and still considered by many to be a startup. But even when I explain our pedigree, approach, capabilities, and where we’re going, the follow-up question I get is: “So with all those capabilities and your scale, why do I still not know of StackPath?”
Part of the B round funding will help us ramp up our sales, marketing, and product teams to get our name and products more of the attention they deserve. But, in addition to that, we decided to start this blog series to help you get a better idea of who we are and where we’re going.
To do this, we’ll start at the beginning.
How StackPath was Founded
StackPath was founded and run by many of the same players who founded and ran SoftLayer, a cloud services provider acquired by IBM in 2013. I was part of the SoftLayer team leading data center (DC) engineering from the early days through to the IBM acquisition and was recruited to StackPath after it was founded to help the company scale.
At SoftLayer, we noticed that the workloads customers were deploying were evolving in a way where bandwidth and latency were becoming more critical to end users.
Building hyperscale data centers was very expensive, and we were constantly focused on driving down unit costs. The easiest way to drive down DC costs is to move data centers away from end users, into the middle of large, unpopulated areas. But that distance works against exactly what these workloads need: it increases latency and reduces bandwidth and throughput.
Cloud vs edge workloads
Standard cloud workloads centralize computing, storage, and end-user delivery in a limited number of larger-scale public cloud, colocated, or on-premise data centers. These data centers require significant capability and cost to distribute data and manage the connectivity between networked locations.
This is where content delivery networks (CDNs) would’ve played a role in distributing content globally, but some of these customers had workloads that weren’t supported by CDNs for various reasons (e.g., non-HTTP content, advanced functionality on top of a CDN, or the need to run custom code in a distributed way).
This led the team to start StackPath with the vision of building a fully distributed edge cloud.
Edge-optimized workloads share computing, storage, and delivery responsibilities between the “origin” data centers and a network of lighter-weight edge data centers (i.e. points of presence) located closer to end users. This allows significant and unique workload optimization, including higher security options, reduced bandwidth costs, accelerated processing time, and superior end user experiences.
It’s important to note that the edge is not replacing the public cloud. Rather, it is complementing both public cloud and private data centers.
Also, not all workloads were made for the edge. If an application is not sensitive to end-user latency or bandwidth, it’s probably best deployed in the public cloud. The edge requires software to be designed or re-architected to run in a distributed way. This means the application should be lightweight and scaled horizontally across many edge sites, rather than scaled up within a single data center.
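To make that design constraint concrete, here is a minimal sketch in Python of the routing decision an edge-ready application ultimately depends on: steering each end user to the lowest-latency of many lightweight sites instead of one central data center. The site names and latency figures are hypothetical, purely for illustration.

```python
# Minimal sketch: choosing the nearest edge site for a user.
# Site names and RTT figures are hypothetical, for illustration only.

def pick_nearest_site(rtt_ms_by_site):
    """Return the site with the lowest measured round-trip time."""
    return min(rtt_ms_by_site, key=rtt_ms_by_site.get)

# In practice these RTTs would come from real measurements
# (anycast routing, DNS-based steering, or client-side probes);
# here they are hard-coded to show the decision itself.
measured = {
    "edge-dallas": 12.0,
    "edge-frankfurt": 95.0,
    "central-cloud": 48.0,  # a single, distant hyperscale DC
}

print(pick_nearest_site(measured))  # -> edge-dallas
```

Note that even the distant "central-cloud" site beats the wrong edge site here, which is why an edge deployment only pays off when the application actually runs in many sites at once.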
That said, much like how there was an evolutionary shift of on-premise workloads to public cloud, engineering teams will start moving many of their workloads to the edge.
How StackPath Was Built
Having come from a hyperscale cloud, we understood how to build compute. But the key to the edge is connectivity, which means peering with all of the ISPs and telcos, or “eyeball networks.” This is what we would need to build the company we envisioned.
Acquiring the pieces
We took an interesting approach as a startup. We thought long and hard about whether it would be better to build it or buy it and spent the early days researching and acquiring a strategic set of services. These services would allow us to accelerate our footprint and scale while giving us access to the intellectual property and teams from those acquisitions to deliver a leading edge platform.
Without this approach, building an edge cloud becomes a chicken and egg problem. Without the services, it’s hard to justify the CAPEX of the hardware build; and without the hardware build, it’s hard to have performant edge services.
Building compute infrastructure globally at scale requires a large upfront CAPEX investment which is a challenge of its own. And getting connectivity to all of the eyeball networks? That’s not easy to do nor can it be done quickly. The carriers that have connectivity to end users go through a very methodical planning process to manage capacity and can’t just allocate hundreds of gigabits of capacity to anyone who asks because it’s also a large upfront investment for them.
Doing all of this as a startup, building the services from the ground up, would have taken roughly 5 to 10 years to scale. But by running strategic acquisitions as services on our edge platform, we could easily justify the scale and bandwidth.
Building the foundation
By 2018 we had successfully merged six companies into one and were able to start turning our dispersed global footprint of legacy infrastructure into a single hyperscale edge platform.
Here are some highlights of how we did this:
- We leveraged our knowledge of hyperscale design to build a resilient and redundant platform architecture that is very cost effective. We referred to this project internally as “the foundation.”
- We rebuilt every network device to support N× 25 Gbps to each server and 100 Gbps interconnects.
- We deployed new server infrastructure and retrofitted legacy infrastructure to leverage hardware offload capabilities via SmartNICs and crypto engines, delivering high-performance workloads.
- We made every acquisition and service the first customer of our edge platform.
We completed this work at the end of 2018 but performed it in a way that wasn’t disruptive to our existing customer base. This is the equivalent of converting a train into an airplane without pulling into the station or killing the passengers, and I cannot express how proud I am of the teams that put in very long hours to make this happen.
During this time, we also launched new services like our serverless offering. It truly took a village to make all of this happen.
StackPath Makes Its Debut
After completing the “foundation,” we now had a fully distributed edge platform and launched our edge compute VM and container offering publicly in Q1 2019. Only a year ago!
This meant we were able to offer the following edge compute infrastructure and services:
- Virtual Machines
- Containers
- Serverless Scripting
- Object Storage
These are now globally available. While all of our infrastructure teams were hard at work on the rebuild, our software engineering teams were equally hard at work refactoring our control plane to deliver all of these services. This control plane, which we call EdgeEngine, is our secret sauce: it understands all of these SaaS/PaaS/IaaS services and our global footprint.
Some key features of EdgeEngine include:
- Origin agnostic: it enables secure, seamless integration of services from any public/private cloud or data center, providing unprecedented opportunities for optimizing operations and innovative new use cases.
- Future proof modality: it’s designed to be technology agnostic, ready to accommodate new hardware and software technologies without requiring radical, platform-wide redesign/redeployment to add new capabilities and services.
- Hyperscale: it’s capable of efficiently scaling from a few to thousands of servers, storage, and networking devices deployed across multiple locations yet still managed in a centralized, automated fashion.
- Inherent security: it’s built on a security-first design imbuing first- and third-party services with inherent security advantages including unparalleled DDoS attack mitigation and the ability to customize security profiles.
All of these capabilities are delivered via a single API and customer portal that enable anyone to build and deploy their own solutions at the edge.
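To illustrate what “a single API” for an origin-agnostic, hyperscale control plane might look like from a customer’s side, here is a hedged sketch of a declarative workload spec: one request describing a container image and the edge cities it should run in, leaving placement and scaling to the control plane. Every field name here is invented for illustration and does not reflect StackPath’s actual API schema.

```python
# Hypothetical sketch of declaring an edge workload for a control
# plane like EdgeEngine. All field names are invented for
# illustration and are NOT StackPath's actual API schema.
import json

def make_workload_spec(name, image, cities, replicas_per_city=2):
    """Build a declarative spec: run `image` in each listed edge city."""
    return {
        "name": name,
        "spec": {
            "containers": [
                {"name": "app", "image": image, "ports": [{"port": 443}]}
            ],
        },
        # The control plane, not the caller, decides which physical
        # servers in each city actually host the replicas.
        "targets": [
            {"city": city, "replicas": replicas_per_city} for city in cities
        ],
    }

spec = make_workload_spec("hello-edge", "nginx:latest", ["DFW", "FRA", "SIN"])
print(json.dumps(spec, indent=2))
```

The design point is that the caller states intent once, and the platform reconciles it across every location, which is what makes the same API work whether the footprint is three sites or three hundred.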
So when people ask, “Why don’t I know of StackPath?” the answer is that we’ve effectively been building the leading edge compute platform in stealth mode. But that’s about to change.
What’s Next for StackPath
As discussed in our B round post, we have chosen some strategic investors that will help us dive into our next phase that we internally refer to as the “Deep Edge Thesis.”
If you want to learn what the “Deep Edge” is, I will dive into that over the course of this blog series. But, for now, I’ll say that the term “edge” is relative.
We are currently in the Internet Exchange (IX)/Enterprise Edge in facilities with Digital Realty/Interxion, Equinix, Telx, Coresite, and other exchanges.
While there are definitely some interesting use cases that call for running a workload on an IoT device like a light bulb, we consider that the End User Edge, and there are other great companies focused on solving those use cases. For StackPath, our next phase is the Telco/Cable Edge, where we will help enable next-generation use cases that will have a fundamental, transformational impact on the way people connect to and use the Internet. So stay tuned for more!