v0.3.0-rc1

Pre-release

Migration from 0.2.x

This version contains major changes, so be extra careful not to recreate cloud resources you don't want to recreate. Some of the steps may make your services temporarily unavailable, so plan for the disruption.
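
Before starting, it may be worth taking a local backup of the Terraform state so that any mistaken terraform state mv can be rolled back later (the backup file name below is only an illustration):

terraform state pull > terraform.tfstate.backup-0.2.x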

  1. Update the version of the module:
module "eks" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
}
  2. Use a separate module to create ECR repositories:
 module "eks" {
   source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
 
-  ecr_repositories = ["api", "web"]
 }
module "ecr" {
  source = "[email protected]:datarockets/infrastructure.git//aws/ecr?ref=v0.3.0"

  app = var.app
  environment = var.environment
  repositories = ["api", "web"]
}
  3. Use a separate module to create the cicd user:
module "eks_cicd_auth" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks/auth/cicd?ref=v0.3.0"

  app = var.app
  environment = var.environment
  cluster_arn = module.eks.cluster_arn
  ecr_repository_arns = module.ecr.repository_arns
  kubernetes_app_namespace = module.kubernetes.app_namespace
}
  4. Use a separate module to manage the aws-auth ConfigMap, since the eks module no longer does it:
 module "eks" {
   source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.3.0"
 
-  masters_aws_groups = ["Ops"]
 }
module "eks_aws_auth" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks/auth/aws?ref=v0.3.0"

  app = var.app
  environment = var.environment

  node_group_iam_role_arn = module.eks.eks_managed_node_group_default_iam_role_arn
  masters_aws_groups = ["Ops"]
  users = [
    {
      userarn = module.eks_cicd_auth.iam_user.arn
      username = module.eks_cicd_auth.iam_user.name
      groups = [module.eks_cicd_auth.kubernetes_group]
    }
  ]
}
  5. Update the kubernetes module to the latest version:
module "kubernetes" {
  source = "[email protected]:datarockets/infrastructure.git//k8s/basic?ref=v0.3.0"
}
  6. If the kubernetes app namespace used to be created by the eks module, the following arguments are no longer needed in the kubernetes module (it now manages the namespace itself):
  module "kubernetes" {
    source = "[email protected]:datarockets/infrastructure.git//k8s/basic?ref=v0.3.0"
  
-   create_app_namespace = false
-   app_namespace = module.eks.app_namespace
  }
  7. If you use snippets in your ingress configuration, enable them when installing the nginx-ingress helm chart:
module "kubernetes" {

  nginx_ingress_helm_chart_options = [
    {
      name = "controller.enableSnippets"
      value = true
    }
  ]
}
  8. Remove the kubernetes-alpha provider entirely.
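
If the provider is pinned in your required_providers block, its removal might look like the diff below (the version constraint is only an illustration); any provider "kubernetes-alpha" configuration block should be removed as well:

  terraform {
    required_providers {
-     kubernetes-alpha = {
-       source  = "hashicorp/kubernetes-alpha"
-       version = "~> 0.5"
-     }
    }
  }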

  9. Update the database module:

  module "database" {
-   source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.2.2"
+   source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.3.0"

    app = var.app
    environment = var.environment
    region = var.region
    vpc_id = module.eks.vpc_id
-   eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks
+   allow_security_group_ids = [module.eks.node_security_group_id]
    database_subnets = {
      "10.0.21.0/24" = "ca-central-1a"
      "10.0.22.0/24" = "ca-central-1b"
    }
  }
  10. Update the redis module in the same way.

  11. Run terraform init -upgrade to download the newer versions of modules and providers.

  12. Run terraform validate and fix the errors. They will mostly be related to output variables that moved from one module to another, e.g. module.eks.app_namespace is no longer available and you should use module.kubernetes.app_namespace instead.

A non-exhaustive list of the changed output variables:

  • module.eks.app_namespace -> module.kubernetes.app_namespace
  • module.eks.cicd_key_id -> module.eks_cicd_auth.iam_user_key_id
  • module.eks.cicd_key_secret -> module.eks_cicd_auth.iam_user_key_secret
  • module.eks.ecr_repository_urls -> module.ecr.repository_urls
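
For example, if you re-export the app namespace as a root-level output (the output block below is only an illustration), the reference changes like this:

  output "app_namespace" {
-   value = module.eks.app_namespace
+   value = module.kubernetes.app_namespace
  }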
  13. Move resources from the eks module to the ecr module.

List the ECR-related resources with terraform state list | grep module.eks.aws_ecr_repository:

module.eks.aws_ecr_repository.ecr_repository["api"]
module.eks.aws_ecr_repository.ecr_repository["web"]

terraform state list | grep module.eks.aws_ecr_lifecycle_policy:

module.eks.aws_ecr_lifecycle_policy.keep_last_10["api"]
module.eks.aws_ecr_lifecycle_policy.keep_last_10["web"]

Move:

terraform state mv 'module.eks.aws_ecr_repository.ecr_repository["api"]' 'module.ecr.aws_ecr_repository.repository["api"]'
terraform state mv 'module.eks.aws_ecr_repository.ecr_repository["web"]' 'module.ecr.aws_ecr_repository.repository["web"]'
terraform state mv 'module.eks.aws_ecr_lifecycle_policy.keep_last_10["api"]' 'module.ecr.aws_ecr_lifecycle_policy.keep_last_10["api"]'
terraform state mv 'module.eks.aws_ecr_lifecycle_policy.keep_last_10["web"]' 'module.ecr.aws_ecr_lifecycle_policy.keep_last_10["web"]'
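
As a quick sanity check after the moves, the resources should now be listed under the new module addresses (and terraform plan should no longer propose recreating them):

terraform state list | grep module.ecr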
  14. Move resources from the eks module to the eks_cicd_auth module.

terraform state list | grep "module\.eks\..*\.cicd":

module.eks.aws_iam_access_key.cicd
module.eks.aws_iam_policy.cicd
module.eks.aws_iam_user.cicd
module.eks.aws_iam_user_policy_attachment.cicd
module.eks.kubernetes_role.cicd
module.eks.kubernetes_role_binding.cicd

Move:

terraform state mv module.eks.aws_iam_access_key.cicd module.eks_cicd_auth.aws_iam_access_key.cicd
terraform state mv module.eks.aws_iam_policy.cicd module.eks_cicd_auth.aws_iam_policy.cicd
terraform state mv module.eks.aws_iam_user.cicd module.eks_cicd_auth.aws_iam_user.cicd
terraform state mv module.eks.aws_iam_user_policy_attachment.cicd module.eks_cicd_auth.aws_iam_user_policy_attachment.cicd
terraform state mv module.eks.kubernetes_role.cicd module.eks_cicd_auth.kubernetes_role.cicd
terraform state mv module.eks.kubernetes_role_binding.cicd module.eks_cicd_auth.kubernetes_role_binding.cicd
  15. Move the ConfigMap resource previously managed by the eks module internals to the eks_aws_auth module:

terraform state list | grep aws_auth:

module.eks.module.eks.kubernetes_config_map.aws_auth[0]

Move:

terraform state mv 'module.eks.module.eks.kubernetes_config_map.aws_auth[0]' 'module.eks_aws_auth.kubernetes_config_map.aws_auth'
  16. Move the app namespace kubernetes resource from the eks module to the kubernetes module.

terraform state list | grep module.eks.kubernetes_namespace:

module.eks.kubernetes_namespace.app

Move:

terraform state mv module.eks.kubernetes_namespace.app module.kubernetes.kubernetes_namespace.app
  17. Move the EKS cluster IAM role, some of its policy attachments, and the default node group from old internal resource addresses to new ones in order to avoid recreation:
terraform state mv 'module.eks.module.eks.aws_iam_role.cluster[0]' 'module.eks.module.eks.aws_iam_role.this[0]'
terraform state mv 'module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSClusterPolicy[0]' 'module.eks.module.eks.aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"]'
terraform state mv 'module.eks.module.eks.aws_iam_role_policy_attachment.cluster_AmazonEKSVPCResourceControllerPolicy[0]' 'module.eks.module.eks.aws_iam_role_policy_attachment.this["arn:aws:iam::aws:policy/AmazonEKSVPCResourceController"]'
terraform state mv 'module.eks.module.eks.module.node_groups.aws_eks_node_group.workers["default"]' 'module.eks.module.eks.module.eks_managed_node_group["default"].aws_eks_node_group.this[0]'
  18. Extract the cluster's legacy IAM role name and specify it as an argument for the eks module.

Run a plan for the eks module: terraform plan -target module.eks. There will be a lot of changes, including recreation of the EKS cluster. You might notice that the cluster is recreated because the cluster's IAM role is recreated, and the IAM role is recreated because its name has changed. The role's name can't be edited in place:

  # module.eks.module.eks.aws_iam_role.this[0] must be replaced
+/- resource "aws_iam_role" "this" {
      ~ arn                   = "arn:aws:iam::123456789012:role/app-staging20210526165212002100000002" -> (known after apply)
      ~ assume_role_policy    = jsonencode( # whitespace changes
            {
                Statement = [
                    {
                        Action    = "sts:AssumeRole"
                        Effect    = "Allow"
                        Principal = {
                            Service = "eks.amazonaws.com"
                        }
                        Sid       = "EKSClusterAssumeRole"
                    },
                ]
                Version   = "2012-10-17"
            }
        )
      ~ create_date           = "2021-05-26T16:52:25Z" -> (known after apply)
      ~ id                    = "app-staging20210526165212002100000002" -> (known after apply)
      ~ managed_policy_arns   = [
          - "arn:aws:iam::123456789012:policy/app-staging-elb-sl-role-creation20210526165212000700000001",
          - "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy",
          - "arn:aws:iam::aws:policy/AmazonEKSServicePolicy",
          - "arn:aws:iam::aws:policy/AmazonEKSVPCResourceController",
        ] -> (known after apply)
      ~ name                  = "app-staging20210526165212002100000002" -> "app-staging-cluster" # forces replacement
      ~ name_prefix           = "app-staging" -> (known after apply)
      - tags                  = {} -> null
        # (4 unchanged attributes hidden)
    }

We're interested in the line that forces the replacement:

~ name = "app-staging20210526165212002100000002" -> "app-staging-cluster" # forces replacement

In order to avoid recreating this role (and, with it, the cluster), we can add legacy_iam_role_name as an argument to the eks module:

  module "eks" {
  
+   # We can't rename the cluster's IAM role without recreating the cluster. The
+   # algorithm for generating the role name changed in the eks module, therefore
+   # we have to either recreate the cluster or keep the legacy IAM role name.
+   legacy_iam_role_name = "app-staging20210526165212002100000002"
  }
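
Besides reading it from the plan output above, the current role name can also be pulled straight from the state, for example:

terraform state show 'module.eks.module.eks.aws_iam_role.this[0]' | grep ' name '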
  19. Make sure that the EKS cluster is not being recreated: terraform plan -target module.eks.
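
A quick way to scan the plan for anything destructive is to grep it for replacements and destroys, for example:

terraform plan -no-color -target module.eks | grep -E 'must be replaced|destroy'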

  20. Apply the changes. WARNING: there will be downtime between the moment the changes for the eks module are applied and the moment the changes for the database modules are applied. (You may come up with a more careful order of applying the changes to avoid the downtime, but that is up to you.)

It is recommended to apply the changes module by module in order to review and control them better:

terraform apply -target module.eks
# downtime here
terraform apply -target module.database
terraform apply -target module.kubernetes
terraform apply
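
Once everything is applied, one final full terraform plan should come back with no pending changes; anything still planned is worth reviewing before considering the migration complete:

terraform plan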

🎉