Environment Directories

There are a few things I don't like about using Workspaces:

  1. Local workspaces are different from Terraform Cloud Workspaces, which is confusing at best
  2. The current workspace isn't particularly obvious, which can more easily lead to human error (wrapper scripts are recommended)
  3. Terraform Cloud, and automating Terraform in general, gets harder/more confusing with local workspaces
  4. A lot of the community doesn't seem to use local workspaces.
  5. At least for the S3 backend, the path change is a little odd. If someone didn't know what the extra env: directory was, they might delete it by accident (which isn't just me guessing, I've heard that story before!). There's an illustration of this just after the list.
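To make that last point concrete, here's roughly how state paths differ in the S3 bucket between local workspaces and the directory approach we're about to set up (paths are illustrative, based on the bucket key used later in this lesson):

# Local workspaces: non-default workspaces get an extra "env:" prefix
cloudcasts-ex/terraform.tfstate                <- default workspace
env:/staging/cloudcasts-ex/terraform.tfstate   <- "staging" workspace

# Directory per environment: every state file lives at a path we chose explicitly
cloudcasts-ex/production/terraform.tfstate
cloudcasts-ex/staging/terraform.tfstate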

Directory + Helper Scripts

There are MANY schemes for using a directory structure to help segment environments or functional areas with Terraform. Here's one I use, based on my own experience and on asking other Terraform users.

Managing environments using the following directory structure is my preferred method (even over the more advanced setup I cover in the next module).

We'll use a directory structure to segment environments. Similar to the local Workspaces feature, this gives us a completely separate state file per environment.

Our new directory structure is:

- root
  |- modules
  |  |- vpc
  |  |- ec2
  |- production
  |  |- cloudcasts.tf
  |  |- variables.tfvars
  |- staging
  |  |- cloudcasts.tf
  |  |- variables.tfvars
  |- backend-production.tf
  |- backend-staging.tf

Note that we keep the backend-ENV.tf files outside of the production and staging directories on purpose - otherwise Terraform would attempt to read them as if they were resources to create. That will make more sense in a bit.

Each environment's main file (e.g. production/cloudcasts.tf) looks very similar. Here are the notable pieces.

Backend Config

We omit certain items that we'll fill in elsewhere:

backend "s3" {
  # Remove "bucket", "key",
  # and for simplicity in this video, "dynamodb_table"
  region  = "us-east-2"
  profile = "cloudcasts"
}
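For context, this partial backend block still sits inside the terraform block of each environment's cloudcasts.tf, just as before (a minimal sketch):

terraform {
  backend "s3" {
    # "bucket" and "key" are intentionally omitted here;
    # they're supplied at init time via -backend-config
    region  = "us-east-2"
    profile = "cloudcasts"
  }
}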

Variables

We go back to using a variable for the environment instead of the locals we used with Workspaces:

variable "infra_env" {
  type        = string
  description = "infrastructure environment"
  default     = "production"
}
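The staging copy of this variable would be the same except for the default (an assumption on my part; adjust the per-environment defaults however you prefer):

# staging/cloudcasts.tf - same variable, different default
variable "infra_env" {
  type        = string
  description = "infrastructure environment"
  default     = "staging"
}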

Modules

We update the module path for each module block:

module "vpc" {
  # Modules are now up one level
  source = "../modules/vpc"
  # ...
}
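The "# ..." is where that module's inputs go. As a purely hypothetical example (the real inputs depend on how your modules are written), you might pass the environment through to the module:

module "vpc" {
  # Modules are now up one level
  source = "../modules/vpc"

  # Hypothetical input; only valid if the vpc module declares
  # an "infra_env" variable
  infra_env = var.infra_env
}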

Backend

We have a new backend file per environment, such as backend-staging.tf that looks like this:

bucket = "terraform-course-cloudcasts"
key    = "cloudcasts-ex/staging/terraform.tfstate"

This is the content we deleted from the S3 backend configuration block.
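backend-production.tf looks the same, pointing at the production path (assuming the same key layout):

bucket = "terraform-course-cloudcasts"
key    = "cloudcasts-ex/production/terraform.tfstate"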

Running Terraform

We have a new setup, so we need to re-init our modules, etc. From the root of our project, we'll run the following command:

# For staging
terraform -chdir=./staging init -backend-config=../backend-staging.tf
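Production is initialized the same way, and once an environment has been initialized, plan and apply only need the -chdir flag:

# For production
terraform -chdir=./production init -backend-config=../backend-production.tf

# After init, day-to-day commands just need -chdir
terraform -chdir=./staging plan
terraform -chdir=./staging apply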

So, we have a bit of boilerplate to run every time we run a command against an environment. This is a good time for a helper script.

Let's make a command that lets us run:

./run staging init
./run staging plan
./run staging apply

That script will be a file named run:

#!/usr/bin/env bash

TF_ENV=$1

DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" >/dev/null 2>&1 && pwd )"

# Always run from the location of this script
cd "$DIR"

if [ $# -gt 0 ]; then
  if [ "$2" == "init" ]; then
    terraform -chdir=./$TF_ENV init -backend-config=../backend-$TF_ENV.tf
  else
    terraform -chdir=./$TF_ENV $2
  fi
fi

# Head back to original location to avoid surprises
cd -

Then use it:

chmod +x run

./run staging init
./run staging plan
./run staging apply
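One optional tweak I'd consider (my own addition, not something covered above): forward any remaining arguments to Terraform so flags like -out pass through the script, e.g. ./run staging plan -out=staging.tfplan:

# Inside run, in place of the plain $2 call (my own suggestion):
terraform -chdir=./"$TF_ENV" "${@:2}"

"${@:2}" expands to everything after the environment name, so any extra flags reach Terraform unchanged.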
