
Project Terraform: Part 1 – Building Great Modules


I’ve been using Terraform for a few weeks now, and so far I’m very impressed with how it works. I have been converting my previous infrastructure from CDK to Terraform, but I started to find that my project folder was feeling a bit cluttered. My original plan was to have a single Git repo per AWS account, with separate files for each service, like so:

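As a rough sketch (the filenames here are my guess, not the originals), that flat, one-file-per-service layout looked something like this:

```
.
├── iam.tf
├── lightsail.tf
├── route53.tf
├── ses.tf
└── terraform.tf
```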

However, I was finding with this layout that each file was getting crowded and not very organised. For example, the file holding my DNS configuration was crammed with all my domains and records.

I went in search of a better layout for my project. And I think I’ve found it.

A more modular approach to Terraform

I’ve created a few Terraform modules as part of my migration; however, I didn’t realise that the same modular approach could be applied to the code base for my accounts. Let’s take a look at the layout I went for:

└── modules
    ├── blog_systemsmystery_tech
    ├── dns
    └── ses
In this example, I still have the usual top-level files, but this time I’ve created a modules folder with three sub-modules: one for this blog, one for any DNS records I need, and one for my SES configuration. For the rest of this post, I am going to look over how I have organised my blog module.

Breaking it down

Within my module I have the following structure:


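Based on the files walked through below, the structure is along these lines (the filenames are my assumption; Terraform loads every .tf file in a directory regardless of what it is called):

```
blog_systemsmystery_tech
├── iam.tf
├── lightsail.tf
├── modules.tf
├── outputs.tf
├── route53.tf
├── terraform.tf
└── variables.tf
```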
I’ll break down what each of these files is used for.

The first file holds any of the IAM-related items I need. Usually, these are the built-in resources whose names start with aws_iam_. For example, I use SES to offload any emails that I need to send via my blog, so I require an IAM user, access keys and a policy to allow my blog access to the SES service. First, I created an IAM user:

resource "aws_iam_user" "wordpress_ses_user" {
  name = "blog_ses_user"
}

Then I created access keys, using Keybase to provide me with an encrypted secret:

resource "aws_iam_access_key" "wordpress_ses_user_access_key" {
  user    = aws_iam_user.wordpress_ses_user.name
  pgp_key = "keybase:a_keybase_user"
}

And finally attached the AWS managed AmazonSESFullAccess policy to the user:

resource "aws_iam_policy_attachment" "wordpress_ses_policy_attachment" {
  name       = "wordpress_ses_policy_attachment"
  users      = [aws_iam_user.wordpress_ses_user.name]
  policy_arn = "arn:aws:iam::aws:policy/AmazonSESFullAccess"
}

Now that I have my IAM user set up, I need to get the output of the access keys. To do this, I use an outputs file and define outputs for the access key ID and the encrypted secret:

output "wordpress_ses_user_iam_access_key" {
  value       = aws_iam_access_key.wordpress_ses_user_access_key.id
  description = "Wordpress SES user access key ID."
}

output "wordpress_ses_user_iam_encrypted_secret" {
  value       = aws_iam_access_key.wordpress_ses_user_access_key.encrypted_secret
  description = "Wordpress SES user encrypted secret."
}

As you can see, because all of these files are contained within one module, I can reference anything defined in one file from any other file in the module and use it to build the outputs. In this example, I reference the aws_iam_access_key.wordpress_ses_user_access_key object and then use its id and encrypted_secret attributes to get my required values.
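As an aside, the encrypted secret can’t be used directly: it has to be decrypted with the matching Keybase key first. Assuming the Keybase CLI is installed and logged in, a pipeline along these lines (the pattern suggested by the AWS provider documentation for aws_iam_access_key) does the trick:

```
terraform output wordpress_ses_user_iam_encrypted_secret | base64 --decode | keybase pgp decrypt
```

Depending on your Terraform version, you may need terraform output -raw to strip the quotes from the value first.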

The next file contains all my Lightsail infrastructure. Terraform’s ability to create Lightsail instances was one of my key drivers when deciding to move to it. We start off by creating a Lightsail instance:

resource "aws_lightsail_instance" "wordpress_instance" {
  name              = "wordpress"
  availability_zone = "eu-west-2b"
  blueprint_id      = "wordpress"
  bundle_id         = "nano_2_0"
}

Two things to note when creating this instance are firstly, the blueprint_id and secondly the bundle_id.

To retrieve the blueprint_id, you run aws lightsail get-blueprints from the command line. This returns all the blueprints which Lightsail currently supports. As I wanted to use the WordPress blueprint, I looked through the list until I found it:


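The relevant entry in that output looks roughly like this (abbreviated; the full objects contain more fields):

```
{
    "blueprintId": "wordpress",
    "name": "WordPress",
    "type": "app",
    "platform": "LINUX_UNIX"
}
```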
From here I take the blueprintId and use this within my instance object.

The bundle_id is used to select the size of instance that you wish to use. For me this was the nano instance type, or nano_2_0 in Terraform speak. You can find out more about the format this value needs to take by checking the Terraform documentation for the aws_lightsail_instance resource.

The final piece of the puzzle is the static IP address which I want to assign to my instance. To do this, I first created and named a static IP, then attached it to the instance by referencing the objects created above:

resource "aws_lightsail_static_ip" "wordpress_static_ip" {
  name = "blog_systemsmystery_tech"
}

resource "aws_lightsail_static_ip_attachment" "wordpress_static_ip_attachment" {
  static_ip_name = aws_lightsail_static_ip.wordpress_static_ip.name
  instance_name  = aws_lightsail_instance.wordpress_instance.name
}

I decided to have a separate file for any modules which I import into this project. For example, within WordPress I am going to be using a backup plugin which will save all my backups to S3. After my post about Home Assistant backups, I decided to write my own module to create all the components needed to make this happen. You can find my module published in the Terraform registry if it would help you as well. To add this module to my project, I created a new module block and configured it as needed:

module "wordpress_backup_bucket" {
  source       = "rj175/s3-backup-bucket/aws"
  version      = "1.0.1"
  pgp_key      = "keybase:a_keybase_user"
  service_name = replace(var.blog_domain, ".", "-")
}

What you will notice is a reference to var.blog_domain. This is a variable which is passed into the module; the replace() function swaps the dots in the domain for hyphens to form the bucket’s service name. These variables are defined in a separate file, which I will go into later.

The final infrastructure component that I needed to create was a DNS record in Route53. This refers back to the static IP address defined in the Lightsail file and uses another variable for the ID of the zone in which the record will be created:

resource "aws_route53_record" "wordpress_a_record" {
  name    = "blog"
  zone_id = var.zone_id
  records = [aws_lightsail_static_ip.wordpress_static_ip.ip_address]
  type    = "A"
  ttl     = 300
}

I also have a file in which I store all my Terraform-related configuration:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "3.18.0"
    }
  }
}

Here, I am defining which providers are required (just AWS for this project) and what version I want to use. I pin the version so as to avoid any issues when the provider is updated by HashiCorp.

Alongside this, I also define any variables that I require. As we saw above, the only ones I need are the blog domain (used to build the backup bucket’s service name) and the zone ID:

variable "blog_domain" {
  description = "Domain name to use for blog."
  type        = string
}

variable "zone_id" {
  description = "Zone ID to create DNS records in."
  type        = string
}

You’ll notice that these variables don’t have a value; their values will be defined at the top level of the project, outside of this module.

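To give a flavour of how this will eventually be consumed, the module gets instantiated at the top level with the variables passed in, something like this (the values here are placeholders, not my real configuration):

```
module "blog_systemsmystery_tech" {
  source      = "./modules/blog_systemsmystery_tech"
  blog_domain = "blog.example.com"
  zone_id     = "Z0123456789EXAMPLE"
}
```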

To sum up, I have now created a local module for this project which creates all the components required for this blog. In the next part, I will show you how I use this module within my main project, making it ready to deploy into production.

Till next time, happy coding!

Published in Terraform