The horrible things Terraform makes me do

2024-09-24

druskus

I hate Terraform with a passion. It works well enough for simple things, and the entire industry has adopted it without question. HCL (the language used by Terraform) is the script of the devil, and HashiCorp's business practices are vile. One thing is true though: it works. I cannot wait for the day other alternatives are mature enough, and we are getting there! (see: Pulumi).

Here are two of my favorite hacks:

NOTE: I mainly work with AWS, which means these examples are AWS-specific; however, the patterns can be applied to other cloud providers.

# Applying AWS Application tags automatically

AWS has this concept of Applications. They allow you to group resources so that they are easier to find and track in the GUI console.

They work like this: first, you create an Application with a name. A special tag, "awsApplication", is then generated, which you can manually apply to each resource you want to group.
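
Applying it by hand looks something like this (a minimal sketch; the bucket is hypothetical, and application_tag is the tag map exposed by the data source introduced further down):

resource "aws_s3_bucket" "example" {
  bucket = "my-example-bucket"

  # application_tag is a map containing the "awsApplication" key. Copying it
  # onto every single resource by hand is exactly what we want to avoid.
  tags = data.aws_servicecatalogappregistry_application.app_data.application_tag
}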

We could simply pass a tags object to every resource we create, but it would be very easy to miss some, especially since not all resources support tags in the first place. Terraform's AWS provider has a better option: default_tags, configured at the provider level and applied to every resource that provider creates.

The catch is that the "awsApplication" tag only exists once the Application itself has been created, so the provider that creates the Application cannot also pull that tag into its own default_tags without a circular dependency. To get around this, we declare a second, aliased provider, used only to create the Application resource:

provider "aws" {
  // ...

  alias = "application" // <-----------
  
  default_tags {
    tags = {
      "druskus:created-by"        = "terraform"
      "druskus:account"           = "ACCOUNT"
      "druskus:terraform_project" = "PROJECT_NAME"
      "druskus:repository"        = "GIT_REPO_URL"
    }
  }
}

resource "aws_servicecatalogappregistry_application" "app" {
  provider    = aws.application // <-----------
  name        = "MyApplication"
  description = "My application"
}

data "aws_servicecatalogappregistry_application" "app_data" {
  provider    = aws.application // <-----------
  id = aws_servicecatalogappregistry_application.app.id
  depends_on = [aws_servicecatalogappregistry_application.app]
}

Now, in our default provider, we can merge the application tag (read through the data source above) into default_tags:

provider "aws" {
  // ...

  default_tags {
    tags = merge(
      data.aws_servicecatalogappregistry_application.app_data.application_tag, // <-----------
      {
        "druskus:created-by" = "terraform"
        "druskus:account" = "ACCOUNT" 
        "druskus:terraform_project" = "PROJECT_NAME"
        "druskus:repository" = "GIT_REPO_URL"
      }
    )
  }
}
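
With that in place, any resource created through the default provider is grouped under the Application automatically. A hypothetical example:

resource "aws_s3_bucket" "logs" {
  bucket = "my-log-bucket"

  # No tags block needed: "awsApplication" and the druskus:* tags are
  # applied by the provider's default_tags.
}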

# Dynamically sourcing variables

I once had to migrate many Terraform stacks to a different AWS account. The number of hard-coded IDs was staggering. Sometimes the IDs came in as tfvars, sometimes they were hard-coded in the middle of the HCL blob. Some stacks depended on each other and on previously created resources. It was not pretty. So I decided to try something else.

I am not sure I would recommend this. Do it at your own risk.

The idea is simple: store things like subnet IDs, regions, account IDs, etc. in AWS Systems Manager Parameter Store.

  1. Create a Terraform stack that defines and populates these parameters. Structure them sensibly; I did something like /${region}/${env}/* (see the sketch after this list).
  2. Then use a Terraform module to fetch them and expose them as local variables.
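
A minimal sketch of step 1, assuming hypothetical parameter names and a VPC defined in the same stack:

data "aws_caller_identity" "current" {}

resource "aws_ssm_parameter" "vpc" {
  name  = "/${var.region}/${var.env}/vpc"
  type  = "String"
  value = aws_vpc.main.id
}

resource "aws_ssm_parameter" "account_id" {
  name  = "/${var.region}/${var.env}/account_id"
  type  = "String"
  value = data.aws_caller_identity.current.account_id
}

// ... and so on for availability zones, subnets, etc.

Step 2 then looks like this in every consuming stack:
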
module "vars" {
  source = "../my-vars" // locally sourced - for simplicy here
  env  = var.env
}

# Expose whichever variables you need as locals for ease of access
locals {
  vpc        = module.vars.vpc
  region     = module.vars.region
  az         = module.vars.az
  subnet     = module.vars.private_subnet_a
  account_id = module.vars.account_id
}
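
The my-vars module itself is just a thin wrapper around aws_ssm_parameter data sources. A sketch of what it might contain (the parameter paths and output names are assumptions):

variable "env" {
  type = string
}

data "aws_region" "current" {}

data "aws_ssm_parameter" "vpc" {
  name = "/${data.aws_region.current.name}/${var.env}/vpc"
}

data "aws_ssm_parameter" "account_id" {
  name = "/${data.aws_region.current.name}/${var.env}/account_id"
}

output "region" {
  value = data.aws_region.current.name
}

output "vpc" {
  value = data.aws_ssm_parameter.vpc.value
}

output "account_id" {
  value = data.aws_ssm_parameter.account_id.value
}

// ... plus az, private_subnet_a, and whatever else the stacks need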

Of course, make sure to set the appropriate permissions on the Parameter Store, especially if you are running Terraform in CI.
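
A sketch of a read-only policy scoped to the parameter prefix, with placeholder region, account ID, and path:

data "aws_iam_policy_document" "read_vars" {
  statement {
    actions = [
      "ssm:GetParameter",
      "ssm:GetParameters",
      "ssm:GetParametersByPath",
    ]
    resources = ["arn:aws:ssm:eu-west-1:123456789012:parameter/eu-west-1/prod/*"]
  }
}

You can then wrap this in an aws_iam_policy and attach it to whatever role your CI runs Terraform with.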

That's it. I'm sorry.