Data State
Finally, just for "fun", we see how to set up a Data state area, where we build an Aurora MySQL database in Terraform.
Resources
Here are some resources we used in the video:
File modules/vpc/main.tf
We'll be creating some subnets just for our Data state area ("database subnets"). We'll want to tag those as such, just like we tagged our public vs private subnets.
```hcl
module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "2.77.0"

  # insert the 49 required variables here
  name = "cloudcasts-${var.infra_env}-vpc"
  cidr = var.vpc_cidr

  azs = var.azs

  # Single NAT Gateway, see docs linked above
  enable_nat_gateway     = true
  single_nat_gateway     = true
  one_nat_gateway_per_az = false

  private_subnets  = var.private_subnets
  public_subnets   = var.public_subnets
  database_subnets = var.database_subnets

  tags = {
    Name        = "cloudcasts-${var.infra_env}-vpc"
    Project     = "cloudcasts.io"
    Environment = var.infra_env
    ManagedBy   = "terraform"
  }

  # NEW ITEMS HERE:
  private_subnet_tags = {
    Role = "private"
  }

  public_subnet_tags = {
    Role = "public"
  }

  # NEW STUFF HERE
  database_subnet_tags = {
    Role = "database"
  }
}
```
File production/data/main.tf
We create a new module for our RDS install, but for the sake of this writeup, I'll first show how we use the module before showing how we create it.
```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.30.0"
    }
  }

  backend "s3" {
    profile = "cloudcasts"
    region  = "us-east-2"
  }
}

provider "aws" {
  profile = "cloudcasts"
  region  = "us-east-2"
}

variable "infra_env" {
  type        = string
  description = "infrastructure environment"
  default     = "production"
}

variable "default_region" {
  type        = string
  description = "the region this infrastructure is in"
  default     = "us-east-2"
}

variable "db_user" {
  type        = string
  description = "the database user"
}

variable "db_pass" {
  type        = string
  description = "the database password"
}

data "aws_vpc" "vpc" {
  tags = {
    Name        = "cloudcasts-${var.infra_env}-vpc"
    Project     = "cloudcasts.io"
    Environment = var.infra_env
    ManagedBy   = "terraform"
  }
}

data "aws_subnet_ids" "database_subnets" {
  vpc_id = data.aws_vpc.vpc.id

  tags = {
    Name        = "cloudcasts-${var.infra_env}-vpc"
    Project     = "cloudcasts.io"
    Environment = var.infra_env
    ManagedBy   = "terraform"
    Role        = "database"
  }
}

module "database" {
  source = "../../modules/rds"

  infra_env       = var.infra_env
  instance_type   = "db.t3.medium"
  subnets         = data.aws_subnet_ids.database_subnets.ids
  vpc_id          = data.aws_vpc.vpc.id
  master_username = var.db_user
  master_password = var.db_pass
}
```
We use some data sources to get the `vpc` and the `database_subnets`, and add some new variables for the database username and password.
Otherwise this file is fairly simple! We call the new `rds` module, which we'll cover below.
We also create a new file `production/data/variables.auto.tfvars`. Files ending in `.auto.tfvars` are loaded automatically, so we don't need to specify them when we run `terraform` commands.
In our case, this file just has our username and password:
```hcl
db_user = "root"
db_pass = "password1234"
```
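A quick aside that's not in the video: because this file holds plaintext credentials, you'd normally keep it out of version control (e.g. via `.gitignore`). If you're on Terraform 0.14 or newer, you can also mark such variables as sensitive so their values are redacted from `plan` and `apply` output. A hypothetical tweak to the `db_pass` variable:

```hcl
# Optional hardening, not shown in the video
variable "db_pass" {
  type        = string
  description = "the database password"
  sensitive   = true # Terraform redacts this value in CLI output
}
```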
Let's create the new RDS module!
File modules/rds/variables.tf
We'll start with the variables we need/want to define for our (simple) use case. We'll be using the sub-module idea here again, and so we'll just expose the variables we want to get our little database working.
```hcl
variable "infra_env" {
  description = "The infrastructure environment."
}

variable "instance_type" {
  description = "RDS instance type and size"
}

variable "subnets" {
  type        = list(string)
  description = "A list of subnets to join"
}

variable "vpc_id" {
  description = "The VPC to create the Aurora cluster within"
}

variable "master_username" {
  description = "The master username of the Aurora cluster"
}

variable "master_password" {
  description = "The master password of the Aurora cluster"
}
```
Nothing too crazy there!
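One optional refinement not covered in the video: since Terraform 0.13 you can add type constraints and custom validation to variables like these. A hypothetical example for `instance_type`:

```hcl
variable "instance_type" {
  type        = string
  description = "RDS instance type and size"

  # Reject values that don't look like RDS instance types
  validation {
    condition     = can(regex("^db\\.", var.instance_type))
    error_message = "The instance_type must start with \"db.\", e.g. db.t3.medium."
  }
}
```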
File modules/rds/main.tf
We'll skip an `outputs.tf` file since we won't use any outputs here, so let's move on to the `main.tf` file and see what we need to do to get this database running.
```hcl
resource "aws_rds_cluster_parameter_group" "parameter_group" {
  name   = "cloudcasts-${var.infra_env}-pg-aurora-cluster"
  family = "aurora-mysql5.7"

  parameter {
    name  = "character_set_server"
    value = "utf8mb4"
  }

  parameter {
    name  = "character_set_client"
    value = "utf8mb4"
  }

  parameter {
    name  = "max_allowed_packet"
    value = "1073741824"
  }

  tags = {
    Name        = "cloudcasts ${var.infra_env} RDS Parameter Group - Aurora Cluster"
    Environment = var.infra_env
    Project     = "cloudcasts.io"
    ManagedBy   = "terraform"
    Type        = "aurora"
  }
}

resource "aws_db_parameter_group" "db_parameter_group" {
  # Name is used in aws_rds_cluster::db_parameter_group_name parameter
  name   = "cloudcasts-${var.infra_env}-pg-aurora"
  family = "aurora-mysql5.7"

  tags = {
    Name        = "cloudcasts ${var.infra_env} RDS Parameter Group - Aurora"
    Environment = var.infra_env
    Project     = "cloudcasts.io"
    ManagedBy   = "terraform"
    Type        = "aurora"
  }
}

# aws database subnet group
# aws rds cluster
# aws rds cluster instance
module "rds-aurora" {
  source  = "terraform-aws-modules/rds-aurora/aws"
  version = "4.0.0"
  # insert the 12 required variables here

  name           = "cloudcasts-${var.infra_env}-aurora-mysql"
  engine         = "aurora-mysql"
  engine_version = "5.7.mysql_aurora.2.09.2"
  instance_type  = var.instance_type

  vpc_id  = var.vpc_id
  subnets = var.subnets

  replica_count = 1

  db_parameter_group_name         = aws_db_parameter_group.db_parameter_group.name
  db_cluster_parameter_group_name = aws_rds_cluster_parameter_group.parameter_group.name

  create_random_password = false
  username               = var.master_username
  password               = var.master_password

  tags = {
    Environment = var.infra_env
    Project     = "cloudcasts.io"
    ManagedBy   = "terraform"
    Type        = "aurora"
  }
}
```
The community module creates the following for us:
- `aws_db_subnet_group`
- `aws_rds_cluster`
- one or more `aws_rds_cluster_instance` resources

We create the two parameter groups ourselves (`aws_rds_cluster_parameter_group` and `aws_db_parameter_group`), as shown above.
File production/network/main.tf
Lastly, we need our VPC to create the database subnets. To do that, we'll expand the calculated CIDR networks from 6 total subnets to 9. We'll give 3 to the public, 3 to the private (just as before), and the 3 new ones to the database subnets:
```hcl
# Stuff omitted:
module "vpc" {
  source = "../../modules/vpc"

  infra_env        = var.infra_env
  vpc_cidr         = "10.0.0.0/17"
  azs              = ["us-east-2a", "us-east-2b", "us-east-2c"]
  public_subnets   = slice(cidrsubnets("10.0.0.0/17", 4, 4, 4, 4, 4, 4, 4, 4, 4), 0, 3)
  private_subnets  = slice(cidrsubnets("10.0.0.0/17", 4, 4, 4, 4, 4, 4, 4, 4, 4), 3, 6)
  database_subnets = slice(cidrsubnets("10.0.0.0/17", 4, 4, 4, 4, 4, 4, 4, 4, 4), 6, 9)
}
```
I didn't mention this in the video as it was getting long, but the subnet configuration is a great place to use local values!
Specifically in our case, we'll calculate the 9 subnets once (instead of 3 times!) and re-use the calculated subnets:
```hcl
# Stuff omitted:

locals {
  cidr_subnets = cidrsubnets("10.0.0.0/17", 4, 4, 4, 4, 4, 4, 4, 4, 4)
}

module "vpc" {
  source = "../../modules/vpc"

  infra_env        = var.infra_env
  vpc_cidr         = "10.0.0.0/17"
  azs              = ["us-east-2a", "us-east-2b", "us-east-2c"]
  public_subnets   = slice(local.cidr_subnets, 0, 3)
  private_subnets  = slice(local.cidr_subnets, 3, 6)
  database_subnets = slice(local.cidr_subnets, 6, 9)
}
```
That's much nicer!
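If you're curious what `cidrsubnets()` actually returns here: each `4` adds four bits to the `/17` prefix, so the function carves nine sequential `/21` networks (2,048 addresses each) out of the VPC range. You can check this yourself in `terraform console` (output shown below; exact formatting varies by Terraform version):

```hcl
> cidrsubnets("10.0.0.0/17", 4, 4, 4, 4, 4, 4, 4, 4, 4)
[
  "10.0.0.0/21",
  "10.0.8.0/21",
  "10.0.16.0/21",
  "10.0.24.0/21",
  "10.0.32.0/21",
  "10.0.40.0/21",
  "10.0.48.0/21",
  "10.0.56.0/21",
  "10.0.64.0/21",
]
```

The `slice()` calls then hand the first three networks to the public subnets, the next three to the private subnets, and the last three to the database subnets.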