Environment Directories
There are a few things I don't like about using Workspaces:
- Local workspaces are different from Terraform Cloud Workspaces, which is confusing at best
- The current workspace isn't particularly obvious, which can more easily lead to human error (wrapper scripts are recommended)
- Terraform Cloud, and automating Terraform in general, gets harder and more confusing with local workspaces
- A lot of the community doesn't seem to use local workspaces.
- At least for the S3 backend, the path change is a little odd. If someone didn't know what the extra env: directory was, they might delete it by accident (which isn't just me guessing, I've heard that story before!). The sketch below shows where that extra prefix lands.
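To illustrate that last point, here's where the S3 backend stores state when local workspaces are in use. This is a sketch assuming the bucket from later in this lesson and a key of cloudcasts-ex/terraform.tfstate; env: is the S3 backend's default workspace_key_prefix:

# Default workspace:
s3://terraform-course-cloudcasts/cloudcasts-ex/terraform.tfstate

# A "staging" workspace - note the extra env: prefix:
s3://terraform-course-cloudcasts/env:/staging/cloudcasts-ex/terraform.tfstate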
Directory + Helper Scripts
There are MANY ways to use a directory structure to segment environments or functional areas with Terraform. Here's one I use, based on my own experience and on asking other Terraform users.
Managing environments using the following directory structure is my preferred method (even over the more advanced approach I cover in the next module).
We'll use a directory structure to segment environments. Similar to the local Workspaces feature, this gives us a completely separate state file per environment.
Our new directory structure is:
- root
  |- modules
     |- vpc
     |- ec2
  |- production
     |- cloudcasts.tf
     |- variables.tfvars
  |- staging
     |- cloudcasts.tf
     |- variables.tfvars
  |- backend-production.tf
  |- backend-staging.tf
Note that we have the backend-ENV.tf files outside of the production and staging directories on purpose - otherwise Terraform would attempt to read them as if they were resources to create. That will make more sense in a bit.
Each env file (e.g. production/cloudcasts.tf) looks very similar.
Backend Config
We omit certain items that we'll fill in elsewhere:
backend "s3" {
  # Remove "bucket", "key",
  # and for simplicity in this video, "dynamodb_table"
  region  = "us-east-2"
  profile = "cloudcasts"
}
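For reference, once we pass the per-environment values back in at init time (covered below), the effective configuration Terraform works with is equivalent to this fully-populated block - a sketch using the staging values from later in this lesson:

# Equivalent merged configuration - not a file you create yourself
terraform {
  backend "s3" {
    bucket  = "terraform-course-cloudcasts"
    key     = "cloudcasts-ex/staging/terraform.tfstate"
    region  = "us-east-2"
    profile = "cloudcasts"
  }
}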
Variables
We go back to the use of a variable for the environment, instead of the locals we used with Workspaces:
variable "infra_env" {
  type        = string
  description = "infrastructure environment"
  default     = "production"
}
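To see why this variable is useful, here's a minimal sketch of the kind of thing it feeds into - the resource, CIDR, and tag names here are illustrative, not from the course code:

# Use the environment to name and tag resources
resource "aws_vpc" "vpc" {
  cidr_block = "10.0.0.0/16" # illustrative CIDR

  tags = {
    Name        = "cloudcasts-${var.infra_env}-vpc"
    Environment = var.infra_env
  }
}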
Modules
We update the module path for each module block:
module "vpc" {
  # Modules are now up one level
  source = "../modules/vpc"
  # ...
}
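If a module needs to know the environment, you can pass the variable through. A minimal sketch, assuming the vpc module declares a matching infra_env input (not shown in this lesson):

module "vpc" {
  source = "../modules/vpc"

  # Assumes modules/vpc contains: variable "infra_env" { type = string }
  infra_env = var.infra_env
}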
Backend
We have a new backend file per environment, such as backend-staging.tf, that looks like this:
bucket = "terraform-course-cloudcasts"
key    = "cloudcasts-ex/staging/terraform.tfstate"
This is the content we deleted from the S3 backend configuration block.
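The contents of backend-production.tf aren't shown in this lesson, but following the naming pattern above, it would presumably look like:

bucket = "terraform-course-cloudcasts"
key    = "cloudcasts-ex/production/terraform.tfstate"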
Running Terraform
We have a new setup, so we need to re-init our modules, etc. From the root of our project, we'll run the following command:
# For staging
# Note: with -chdir, the -backend-config path is resolved relative
# to the target directory, hence the "../"
terraform -chdir=./staging init -backend-config=../backend-staging.tf
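The production equivalent follows the same shape, using the backend-production.tf file from the directory tree above:

# For production
terraform -chdir=./production init -backend-config=../backend-production.tf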
So, we have a bit of boilerplate to run every time we execute a command against an environment. This is a good time for a helper script.
Let's make a command that lets us do the following:
./run staging init
./run staging plan
./run staging apply
That script will be a file named run:
#!/usr/bin/env bash

TF_ENV=$1

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

# Always run from the location of this script
cd "$DIR"

# Require both an environment and a Terraform subcommand
if [ $# -gt 1 ]; then
    if [ "$2" == "init" ]; then
        terraform -chdir="./$TF_ENV" init -backend-config="../backend-$TF_ENV.tf"
    else
        # Pass the subcommand and any extra flags through to Terraform
        terraform -chdir="./$TF_ENV" "${@:2}"
    fi
fi

# Head back to the original location to avoid surprises
cd -
Then use it:
chmod +x run

./run staging init
./run staging plan
./run staging apply