Terraform Workspaces
We'll keep one AWS account for multiple environments, and make use of workspaces as the first way we'll show managing multiple environments.
Note that a local workspace is different from a workspace in Terraform Cloud. In Terraform Cloud, a workspace might be an entirely different set of infrastructure, and is likely a separate GitHub repository.
When used locally, workspaces are essentially differently named sets of the same infrastructure (which sounds a lot like "environments"!).
From the Workspaces docs:
Named workspaces allow conveniently switching between multiple instances of a single configuration within its single backend. They are convenient in a number of situations, but cannot solve all problems.
Terraform starts on the "default" workspace, but we can create a new one. We'll call it "staging":
terraform workspace list
terraform workspace help

terraform workspace new staging
terraform workspace show # staging
Workspace and Variables
The current workspace can be retrieved via the terraform.workspace named value. In our case, this replaces the infra_env variable, so let's go ahead and see what that looks like:
# Delete/comment out the variable "infra_env" declaration
#variable "infra_env" {
#  type        = string
#  description = "infrastructure environment"
#}

# Use a "local" variable to determine our infra_env
locals {
  infra_env = terraform.workspace

  # Or, optionally:
  # infra_env = terraform.workspace == "default" ? "dev" : terraform.workspace
}
We delete/comment out the variable "infra_env" {} declaration, then use a locals block to create a local variable of the same name, infra_env, and assign it the value of the current workspace.
Then we can replace each instance of var.infra_env with the local variable we created:
# Replace all instances of var.infra_env with local.infra_env
infra_env = local.infra_env
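For example, a resource that interpolates the environment name into a bucket name and tags might now look like this. This is a sketch only: the aws_s3_bucket resource and the "assets" naming are illustrative assumptions, not taken from the project.

```hcl
# Illustrative only: a resource consuming the workspace-derived
# local.infra_env (bucket name is an assumption, not from the article)
resource "aws_s3_bucket" "assets" {
  bucket = "cloudcasts-${local.infra_env}-assets"

  tags = {
    Environment = local.infra_env
  }
}
```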
After these changes, we can run a terraform plan and ensure Terraform does not want to change anything (we're still using the same environment name, "staging").
Note that we used a "local" variable, which is handy for computing a value once and re-using that value multiple times.
Duplicating Infrastructure
To duplicate our infrastructure, we can create a new workspace, and then re-run plan/apply to see Terraform create a new, duplicate set of infrastructure!
terraform workspace new production
terraform plan
terraform apply
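Once the workspace drives the environment name, a handy pattern is looking up per-environment settings from a map keyed on it. A minimal sketch, assuming hypothetical instance types and a local name not present in the project:

```hcl
locals {
  # Illustrative: choose a per-environment value keyed by the
  # workspace-derived environment name (values are assumptions)
  instance_type = {
    staging    = "t3.small"
    production = "t3.large"
  }[local.infra_env]
}
```

This keeps environment-specific values in one place instead of scattering conditionals through the configuration.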
In our state files, we can see what happened here. Essentially each workspace is nothing more than a new state file with the workspace name included in it.
For the s3 backend, we get an S3 "key" (file path) like the following for each (non-default) workspace: s3://cloudcasts-terraform-ex/env:/staging/cloudcasts.ex.
This path name looks a bit odd to my eyes (note the use of env:), but it's a convention used by the S3 backend, as documented in the S3 backend docs.
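That env: prefix is configurable: the S3 backend accepts a workspace_key_prefix argument that controls where non-default workspace state files are placed. A sketch of the backend block, where the bucket and key match the path above but the region is an assumption:

```hcl
terraform {
  backend "s3" {
    bucket = "cloudcasts-terraform-ex"
    key    = "cloudcasts.ex"
    region = "us-east-2" # assumed; use your bucket's region

    # Non-default workspaces are stored at
    # "<workspace_key_prefix>/<workspace name>/<key>".
    # The prefix defaults to "env:" if omitted:
    # workspace_key_prefix = "env:"
  }
}
```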