By: Bill O'Neill (@woneill)
Edited by: Kerim Satirli (@ksatirli)
Terraform is "Infrastructure as Code," and like all code, it benefits from regular review and refactoring to:
- improve code readability and reduce complexity
- improve the maintainability of the source code
- create a simpler, cleaner, and more expressive internal architecture or object model to improve extensibility
This article outlines the approaches that have helped my teams when refactoring Terraform code bases.
Convert modules to independent Git repositories
If your Terraform Git repository has grown organically, you will likely have a monorepo structure complete with embedded modules, similar to this:
```
$ tree terraform-monorepo/
.
├── README.md
├── main.tf
├── variables.tf
├── outputs.tf
├── ...
├── modules/
│   ├── moduleA/
│   │   ├── README.md
│   │   ├── variables.tf
│   │   ├── main.tf
│   │   ├── outputs.tf
│   ├── moduleB/
│   ├── .../
```
Encapsulating resources within modules is a great step, but the monorepo structure makes it difficult to iterate on individual module development down the line.
Splitting the modules into independent Git repositories will:
- Enable module development in an isolated manner
- Support re-use of module logic in other Terraform code bases, across your organization
- Enable publishing to public and private Terraform Registries
Here's a process that you can follow to make a module a stand-alone Git repository while preserving the historical log messages. The steps below show how to extract `moduleA` from the above file tree into its own Git repository.
- Clone the Terraform Git repository to a new directory. I recommend naming the directory after the module you plan on converting:

```shell
git clone <REMOTE_URL> moduleA
```
- Change into the new directory:

```shell
cd moduleA
```
- Use `git filter-branch` to split out the module into a new repository:

```shell
FILTER_BRANCH_SQUELCH_WARNING=1 git filter-branch --subdirectory-filter modules/moduleA -- --all
```

Note that we're squelching the warning about `filter-branch`. See the filter-branch manual page for more details if you're interested.
- Now your directory will only contain the contents of the module itself, while still having access to the full Git history. You can run `git log` to confirm this.
- Create a new Git repository and obtain the remote URL for it, then update the `origin` in the filtered repository:

```shell
git remote set-url origin <NEW_REMOTE_URL>
git push -u origin main
```
- Tag the repo as `v1.0.0` before making any changes:

```shell
git tag v1.0.0
git push --tags
```
- Now that the new repository is ready to be used, update the existing references to the module to use a `source` argument that points to the tag you just created; a fuller before-and-after example follows this list. The “Generic Git Repository” section in Terraform's Module Sources documentation has more details on the format. Replace lines such as

```hcl
source = "../modules/moduleA"
```

with

```hcl
source = "git::<NEW_REMOTE_URL>?ref=v1.0.0"
```

- Alternatively, publishing your module to a Terraform registry is an option (but this is outside the scope of this article).
- Once all `source` arguments that previously pointed to the directory path have been replaced with Git repository or Terraform registry references, delete the directory-based module in the original Terraform repository.
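Putting it together, a module block that previously referenced the local path could be updated to the tagged Git source like this (the module name and repository URL below are placeholders, not values from your code base):

```hcl
# Before: module sourced from the local monorepo path
module "moduleA" {
  source = "../modules/moduleA"
}

# After: module sourced from its own Git repository, pinned to the v1.0.0 tag
# (replace the URL with your repository's actual remote URL)
module "moduleA" {
  source = "git::https://github.com/example-org/terraform-moduleA.git?ref=v1.0.0"
}
```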
Update version constraints with tfupdate
Masayuki Morita's `tfupdate` utility can be used to recursively update version constraints of Terraform core, providers, and modules. As you start refactoring modules and bumping their version tags, `tfupdate` becomes an invaluable tool for ensuring all references have been updated.
Some examples of `tfupdate` usage, assuming the current directory is to be updated (a sketch of the resulting edits follows the list):
- Updating the version of Terraform core:

```shell
tfupdate terraform --version 1.0.11 --recursive .
```

- Updating the version of the Google Terraform provider:

```shell
tfupdate provider google --version 4.3.0 --recursive .
```

- Updating the version references of Git-based module sources can be done with the module subcommand, for example:

```shell
tfupdate module git::<REMOTE_URL> --version 1.0.1 --recursive .
```
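To illustrate the kind of edits these commands make, here is a sketch of a `terraform` block before and after running the first two commands (the "was" constraint values are assumptions about your existing code, not output from the tool):

```hcl
terraform {
  # was "1.0.10" before `tfupdate terraform --version 1.0.11 --recursive .`
  required_version = "1.0.11"

  required_providers {
    google = {
      source = "hashicorp/google"
      # was "4.2.0" before `tfupdate provider google --version 4.3.0 --recursive .`
      version = "4.3.0"
    }
  }
}
```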
Test state migrations with tfmigrate
Many Terraform users are hesitant to refactor their code base, since changes can require updates to the Terraform state. Manually updating the state in a safe way involves duplicating the state, updating it locally, then copying it back in place.
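For reference, a minimal sketch of that manual workflow (the resource addresses are just examples):

```shell
# Copy the remote state to a local working file
terraform state pull > work.tfstate

# Update the local copy (here: renaming a resource address)
terraform state mv -state=work.tfstate \
  google_storage_backup.stage-backups \
  google_storage_backup.stage_backups

# Copy the updated state back in place
terraform state push work.tfstate
```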
In addition to `tfupdate`, Masayuki Morita has another excellent utility that can be used to apply Terraform state operations in a declarative way while validating the changes before committing them: `tfmigrate`.
You can do a dry-run migration, where you simulate the state operations with a temporary local state file and check that `terraform plan` reports no changes after the migration. This workflow is safe and non-disruptive, as it does not actually update the remote state.
If the dry-run migration looks good, you can use `tfmigrate` to apply the state operations in a single transaction instead of multiple individual changes.
Migrations are written in HCL and use the following format:
migration "state" "test" { dir = "." actions = [ "mv google_storage_backup.stage-backups google_storage_backup.stage_backups", "mv google_storage_backup.prod-backups google_storage_backup.prod_backups", ] }
Each action line is functionally identical to the command you’d run manually, such as `terraform state <action> …`. A full list of possible actions is available on the tfmigrate website.
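For instance, the first action in the migration above corresponds to running this command by hand:

```shell
terraform state mv google_storage_backup.stage-backups google_storage_backup.stage_backups
```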
Quoting resources that have indexed keys can be tricky. The best approach appears to be wrapping the entire resource address in single quotes and escaping the double quotes in the index. For example:
actions = [ "mv docker_container.nginx 'docker_container.nginx[\"This is an example\"]'", ]
Testing the state migrations can be done via `tfmigrate plan <filename>`. The output will show you what `terraform plan` would look like if you had actually carried out the state changes.
Applying the migration to the actual state is done via `tfmigrate apply <filename>`. Note that by default, it will only apply the changes if the result from `tfmigrate plan` was clean.
If you still want to apply changes to a “dirty” state, you can do so by adding a `force = true` line to the migration file.
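A force-applied version of the earlier migration would look roughly like this:

```hcl
migration "state" "test" {
  dir   = "."
  force = true # apply even if `tfmigrate plan` detects remaining differences

  actions = [
    "mv google_storage_backup.stage-backups google_storage_backup.stage_backups",
    "mv google_storage_backup.prod-backups google_storage_backup.prod_backups",
  ]
}
```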
If you are using Terraform 1.1 or newer, there is now a built-in `moved` statement that works similarly to these approaches. I haven’t tested it out yet, but it looks like a useful feature! I can see it being especially useful for users who may not have direct access to state files, such as Terraform Cloud, Terraform Enterprise, or Atlantis users. See the announcement in the 1.1 release as well as the HashiCorp Learn tutorial for more details.
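For comparison, the first rename from the tfmigrate example above could be expressed as a `moved` block directly in your configuration (a sketch, assuming Terraform 1.1+):

```hcl
moved {
  from = google_storage_backup.stage-backups
  to   = google_storage_backup.stage_backups
}
```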
Ensure standards compliance with TFLint
According to its website, TFLint is a Terraform linter with a handful of key features:
- Finding possible errors (like illegal instance types) for major Cloud providers (AWS/Azure/GCP)
- Warning about deprecated syntax and unused declarations
- Enforcing best practices and naming conventions
TFLint has a plugin system for including cloud provider-specific linting rules as well as updated Terraform rules. Setting up the list of rules can be done on the command line, but it is recommended to use a config file to manage the extensive list of rules to apply to your codebase.
Here is a configuration file that enables all of the possible Terraform rules and includes AWS-specific rules. Save it in the root of your Git repository as `.tflint.hcl`, then initialize TFLint by running `tflint --init`. Now you can lint your codebase by running `tflint`:
```hcl
config {
  module              = false
  disabled_by_default = true
}

plugin "aws" {
  enabled = true
  version = "0.10.1"
  source  = "github.com/terraform-linters/tflint-ruleset-aws"
}

rule "terraform_comment_syntax" {
  enabled = true
}

rule "terraform_deprecated_index" {
  enabled = true
}

rule "terraform_deprecated_interpolation" {
  enabled = true
}

rule "terraform_documented_outputs" {
  enabled = true
}

rule "terraform_documented_variables" {
  enabled = true
}

rule "terraform_module_pinned_source" {
  enabled = true
}

rule "terraform_module_version" {
  enabled = true
  exact   = false # default
}

rule "terraform_naming_convention" {
  enabled = true
}

rule "terraform_required_providers" {
  enabled = true
}

rule "terraform_required_version" {
  enabled = true
}

rule "terraform_standard_module_structure" {
  enabled = true
}

rule "terraform_typed_variables" {
  enabled = true
}

rule "terraform_unused_declarations" {
  enabled = true
}

rule "terraform_unused_required_providers" {
  enabled = true
}

rule "terraform_workspace_remote" {
  enabled = true
}
```
Automate checks with pre-commit
Setting up git hooks with the pre-commit framework allows you to automatically run TFLint, as well as many other Terraform code checks, prior to any commit.
Here is a sample `.pre-commit-config.yaml` that combines Anton Babenko's excellent collection of Terraform-specific hooks with some out-of-the-box hooks for pre-commit. It ensures that your Terraform commits are:
- Following the canonical format and style per `terraform fmt`
- Syntactically valid and internally consistent per `terraform validate`
- Passing TFLint rules
- Ensuring that good practices are followed, such as:
  - merge conflicts are resolved
  - private SSH keys aren't included
  - commits are done to a branch instead of directly to `master` or `main`
```yaml
repos:
  - repo: https://github.com/antonbabenko/pre-commit-terraform
    rev: v1.59.0
    hooks:
      - id: terraform_fmt
      - id: terraform_validate
      - id: terraform_tflint
        args:
          - '--args=--config=__GIT_WORKING_DIR__/.tflint.hcl'
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.0.1
    hooks:
      - id: check-added-large-files
      - id: check-merge-conflict
      - id: check-vcs-permalinks
      - id: check-yaml
      - id: detect-private-key
      - id: end-of-file-fixer
      - id: no-commit-to-branch
      - id: trailing-whitespace
```
You can take advantage of this configuration by:
- Installing the pre-commit framework per the instructions on the website
- Creating the above configuration in the root directory of your Git repository as `.pre-commit-config.yaml`
- Creating a `.tflint.hcl` in the base directory of the repository
- Initializing the pre-commit hooks by running `pre-commit install`
Now whenever you create a commit, the hooks will run against any changed files and report back issues.
Since the pre-commit framework normally only runs against changed files, it’s a good idea to start off by validating all files in the repository by running `pre-commit run --all-files`.
Conclusion
These approaches help make it easier and safer to refactor Terraform codebases, speeding up a team's "Infrastructure as Code" velocity.
This helped my team gain confidence in making changes to our legacy modules and enabled greater reusability. Standardizing on formatting and validation checks also sped up code reviews; we could focus on module logic instead of looking for typos or broken syntax.