This tutorial is part two of a three-part series on using Ansible and Terraform to manage StackPath edge compute resources, specifically edge VMs, more efficiently.
In this tutorial we will use Terraform’s StackPath provider to provision an edge VM instance.
To offer an easy introduction to provisioning edge VM instances with Terraform, we will focus on provisioning a single instance in this tutorial. In part three of this series, we will use Terraform to provision multiple instances all at once.
Using Terraform for Provisioning Edge Compute Resources
Terraform is a popular infrastructure as code tool for managing infrastructure using declarative configuration files and modules. With the StackPath Terraform provider, developers can manage StackPath’s edge compute resources with Terraform.
Prerequisites
To get up and running in this part of the series you will need the following:
StackPath API credentials: To configure the StackPath Terraform provider you will require API credentials for authenticating with the StackPath API. Follow the API Quickstart Guide to create new API credentials for your StackPath account.
StackPath Stack ID: You can have one or more stacks in your StackPath account. You will need the ID of the default stack that was created for you during part one of this series; you can find it in the StackPath customer portal.
Install Terraform
There are several ways to install Terraform. The fastest way is to download the latest release for your operating system from the Terraform downloads page.
Unzip the downloaded file and move the binary to a directory in your $PATH environment variable to make it globally available.
For macOS, you can run brew install terraform to install Terraform.
Verify the Terraform installation by running the following command.
terraform --version
Your output will look like this:
Terraform v0.12.24
If the Terraform binary is installed correctly, you will see the version of Terraform you have installed.
Now that Terraform is installed, let’s configure the StackPath provider plugin!
Setting up the StackPath Terraform provider
First create a new directory with any name of your choice. For this section of the tutorial we will use the directory name tf-stackpath.
Terraform configuration files are written in Terraform's declarative, human-readable language and end with the .tf extension. The language describes an intended end state rather than the steps to reach it.
As with other infrastructure as code tools, the resources and configuration settings for a Terraform project can live in a single file or be split across multiple configuration files. This gives you the freedom to organize resources in whatever way works best for you.
To start, create a new file in your working directory called variables.tf which will contain variables specific to the StackPath provider. After this, enter the following:
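The original file contents are not reproduced here, so the following is a minimal sketch of variables.tf, consistent with the variable names referenced in main.tf:

```hcl
# Input variables used to configure the StackPath provider
variable "stackpath_stack_id" {
  description = "The ID of the stack that all resources should be created in"
}

variable "stackpath_client_id" {
  description = "The client ID for your StackPath API credentials"
}

variable "stackpath_client_secret" {
  description = "The client secret for your StackPath API credentials"
}
```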
This configuration defines three variables used when executing Terraform to configure the StackPath provider.
stackpath_stack_id: the ID of the stack that all resources should be created in
stackpath_client_id: the client ID for your API credentials
stackpath_client_secret: the client secret for your API credentials
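You can supply values for these variables on the command line, in a terraform.tfvars file, or through environment variables: Terraform automatically maps any TF_VAR_&lt;name&gt; environment variable to the input variable of the same name. For example (the values below are placeholders; use your own stack ID and API credentials):

```shell
# Terraform reads TF_VAR_<name> environment variables as input variables.
export TF_VAR_stackpath_stack_id="your-stack-id"
export TF_VAR_stackpath_client_id="your-client-id"
export TF_VAR_stackpath_client_secret="your-client-secret"
```

This keeps secrets out of your configuration files and your shell history of Terraform commands.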
Next, create a file called main.tf. This file will contain the compute resources and network policies needed for this tutorial. After this, enter the following:
# Specify the StackPath provider and your access details
provider "stackpath" {
  stack_id      = var.stackpath_stack_id
  client_id     = var.stackpath_client_id
  client_secret = var.stackpath_client_secret
}

# Create a new Ubuntu virtual machine workload
resource "stackpath_compute_workload" "jollofx" {
  name = "Jollof X Workload"
  slug = "jollofx"

  # Define multiple labels on the workload VM.
  labels = {
    "role"        = "web-server"
    "environment" = "production"
  }

  # Define the network interface.
  network_interface {
    network = "default"
  }

  # Define an Ubuntu virtual machine
  virtual_machine {
    # Name that should be given to the VM
    name = "app"

    # StackPath image to use for the VM
    image = "stackpath-edge/ubuntu-1804-bionic:v201909061930"

    # Hardware resources dedicated to the VM
    resources {
      requests = {
        # The number of CPU cores to allocate
        "cpu" = "1"
        # The amount of memory the VM should have
        "memory" = "2Gi"
      }
    }

    # The ports that should be publicly exposed on the VM.
    port {
      name                           = "ssh"
      port                           = 22
      protocol                       = "TCP"
      enable_implicit_network_policy = true
    }

    port {
      name                           = "http"
      port                           = 80
      protocol                       = "TCP"
      enable_implicit_network_policy = true
    }

    port {
      name                           = "https"
      port                           = 443
      protocol                       = "TCP"
      enable_implicit_network_policy = true
    }

    # Cloud-init user data. Provide at least a public key so you can SSH
    # into the instance. Replace the placeholder with your own public key.
    user_data = <<EOT
#cloud-config
ssh_authorized_keys:
  - ssh-rsa <your-public-key>
EOT
  }
}
This main.tf configuration file contains the stackpath_compute_workload resource.
A stackpath_compute_workload can define either a virtual_machine or a container.
Finally, create an outputs.tf file. Although optional, this file tells Terraform what data we want output.
Add this to the file:
# Remember to replace jollofx with the name of your Edge Compute resource
output "my-terraform-workload-instances" {
  value = {
    for instance in stackpath_compute_workload.jollofx.instances :
    instance.name => instance.external_ip_address
  }
}
This tells Terraform to output a map of instance names to IP addresses.
NOTE: Replace jollofx in the outputs.tf file with your resource name.
Cloud-config
Cloud-init is the industry standard multi-distribution method for cross-platform cloud instance initialization. It is supported across all major public cloud providers including StackPath.
Cloud-config files are special scripts designed to be run by the cloud-init process. These are generally used for initial configuration on the very first boot of a server and a popular use is to add ssh-authorized-keys.
NOTE: The ssh-authorized-keys value shown is a generated example, not a real key; use your own public key.
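The cloud-config document is what we pass to the VM through the user_data field in main.tf. A minimal cloud-config that authorizes an SSH key looks like this (the key below is a placeholder; substitute your own public key):

```yaml
#cloud-config
ssh_authorized_keys:
  - ssh-rsa AAAAB3... user@example.com
```

On first boot, cloud-init adds this key to the default user's authorized keys, which is what lets you SSH into the instance over the port 22 rule defined in the workload.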
Initialize Terraform
With the StackPath provider configured, we need to initialize Terraform to set up the project. The terraform init command reads through the configuration files and downloads any plugins required by your providers. Run it in your project directory:
terraform init
The output will be similar to this:
Initializing the backend...
Initializing provider plugins...
- Checking for available provider plugins...
- Downloading plugin for provider "stackpath" (terraform-providers/stackpath) 1.3.0...
The following providers do not have any version constraints in configuration,
so the latest version was installed.
To prevent automatic upgrades to new major versions that may contain breaking
changes, it is recommended to add version = "..." constraints to the
corresponding provider blocks in configuration, with the constraint strings
suggested below.
* provider.stackpath: version = "~> 1.3"
Terraform has been successfully initialized!
You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.
If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
With Terraform initialized, let’s create the resources!
Terraform plan and apply
The terraform plan command creates an execution plan. It is a good way to check whether the execution plan for a set of changes matches your expectations without making any changes to real resources or to the state.
Run this command:
terraform plan
You will get an output similar to this:
Refreshing Terraform state in-memory prior to plan...
The refreshed state will be used to calculate this plan, but will not be
persisted to local or remote state storage.
------------------------------------------------------------------------
An execution plan has been generated and is shown below.
Resource actions are indicated with the following symbols:
+ create
Terraform will perform the following actions:
# stackpath_compute_workload.jollofx will be created
+ resource "stackpath_compute_workload" "jollofx" {
+ annotations = (known after apply)
+ id = (known after apply)
+ labels = {
+ "environment" = "production"
+ "role" = "web-server"
}
+ name = "Jollof X Workload"
+ slug = "jollofx"
. . . . . . . . . . . . . . . . .
While terraform plan lets you review your configuration to see whether it meets your expectations, terraform apply applies the changes required to reach the desired state of the configuration.
Run this command:
terraform apply
You will get an output similar to terraform plan but you will be asked if you want to perform the actions. Your answer should be yes.
...
Do you want to perform these actions?
Terraform will perform the actions described above.
Only 'yes' will be accepted to approve.
Enter a value:
Upon entering yes, Terraform will create the workload and network policy.
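Once the apply completes, Terraform prints the outputs defined in outputs.tf; you can also display them at any time with the terraform output command. The instance name and IP address below are illustrative, and the exact naming depends on your workload:

```
$ terraform output my-terraform-workload-instances
{
  "jollofx-app-0" = "151.139.xx.xx"
}
```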
You can head over to your StackPath customer portal to see the newly created workloads.
You should see the name and slug from the main.tf file as above.
You also have the option to turn on Remote Management if you intend to manage your instances remotely via a serial console or VNC. You can access your instances via SSH because you opened port 22 via Terraform.
The Remote Management option is turned off by default.
Three instances were created in the workload, each with its own private and public IP address.
You can now access one or more instances in your workload via SSH and Ansible.
Conclusion
In this tutorial we learned about Terraform and used it to create workload instances. You can now modify and update these resources with Terraform without accessing the StackPath customer portal.