
v0.2.0

@dzhlobo released this on 06 Jun 21:30 · 33 commits to main since this release

k8s/basic

A number of changes were added to make it possible to set up a cluster step by step and to support clusters managed by AWS EKS.

  • 177a264 and bd28e49
The dcr_credentials, ingresses, and secrets variables are now optional.
  • 12e4151
Set disable_tls in an ingress configuration to disable TLS and certificate acquisition for that particular ingress.
  • 86e8fa2
Use nginx_ingress_helm_chart_options to pass options to the nginx-ingress Helm chart.
  • c6e6653
A new output variable host was added. It contains the hostname of the load balancer of the nginx-ingress deployment.
  • 05f3992
The application namespace can now be created outside of the kubernetes module and passed in as a variable:
module "kubernetes" {
  # ...
  create_app_namespace = false
  app_namespace        = module.eks.app_namespace
}
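Taken together, the new options can be combined in one k8s/basic module call. This is a sketch only: the disable_tls flag and the nginx_ingress_helm_chart_options variable come from the notes above, while the ingress field layout and the chart-option shape are assumptions, not the module's documented schema:

```hcl
module "kubernetes" {
  source = "[email protected]:datarockets/infrastructure.git//k8s/basic?ref=v0.2.0"

  # dcr_credentials and secrets are optional now and omitted here

  ingresses = {
    api = {
      disable_tls = true # no TLS or certificate acquisition for this ingress
    }
  }

  # forwarded to the nginx-ingress Helm chart (option shape is an assumption)
  nginx_ingress_helm_chart_options = [
    {
      name  = "controller.replicaCount"
      value = "2"
    }
  ]
}
```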

aws/eks (new)

You can create a Kubernetes cluster managed by AWS EKS:

module "eks" {
  source = "[email protected]:datarockets/infrastructure.git//aws/eks?ref=v0.2.0"

  cluster_version = "1.20"

  app = var.app
  environment = var.environment
  azs = ["${var.region}a", "${var.region}b"]

  masters_aws_groups = ["Ops"] # "Ops" is an AWS IAM group. Every user from this group will be added to the "system:masters" Kubernetes group

  ecr_repositories = ["api", "app"] # AWS ECR repositories
}

Output variables:

  • app_namespace - the namespace created for the app. It is created in the aws/eks module because the module also adds a Kubernetes role for the "cicd" user, which is allowed to manage deployments.
  • vpc_id - the EKS cluster is placed in a newly created VPC, and the id of that VPC is returned.
  • cluster_id - the name of the cluster, equal to "${var.app}-${var.environment}". You can use it later in the aws_eks_cluster_auth data source to pull an authentication token for the kubernetes provider.
  • private_cidr_blocks and public_cidr_blocks - the EKS cluster VPC has 2 private and 2 public subnets. Private subnets access the internet via NAT gateways placed in the public subnets; public subnets have an Internet Gateway. You or Kubernetes controllers may place pods in either subnet group.
  • ecr_repository_urls - a map of container registry repository names to repository URLs.
  • cicd_key_id - AWS_ACCESS_KEY_ID of the newly created cicd user. The cicd user has policies allowing it to change deployments in the app namespace. It is recommended to use the cicd user's credentials on your CI/CD server.
  • cicd_key_secret - AWS_SECRET_ACCESS_KEY of the cicd user.
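For example, cluster_id can feed the kubernetes provider configuration. A minimal sketch, assuming the standard aws_eks_cluster and aws_eks_cluster_auth data sources of the Terraform AWS provider:

```hcl
# Look up connection details and a short-lived token by cluster name.
data "aws_eks_cluster" "cluster" {
  name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
  name = module.eks.cluster_id
}

provider "kubernetes" {
  host                   = data.aws_eks_cluster.cluster.endpoint
  cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority[0].data)
  token                  = data.aws_eks_cluster_auth.cluster.token
}
```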

aws/postgresql (new)

You can create a single-instance RDS PostgreSQL database:

module "database" {
  source = "[email protected]:datarockets/infrastructure.git//aws/postgresql?ref=v0.2.0"

  eks = {
    cluster_name = "${var.app}-${var.environment}"
  }

  app = var.app
  environment = var.environment
  region = var.region
  vpc_id = module.eks.vpc_id
  eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks # allow access to the database from EKS private subnets
  database_subnets = {
    "10.0.21.0/24" = "ca-central-1a" # CIDR blocks and availability zones of the subnets the database instance will be placed in
    "10.0.22.0/24" = "ca-central-1b"
  }
}

A new security group is added that allows pods from EKS private subnets to access the database server.

Output variables:

  • database - an object containing the host, port, username, and database keys. A new regular user is created for the app; it has full access only to the single newly created database.
  • database_password
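One way to consume these outputs is to hand them to the app as a Kubernetes secret. A sketch only, assuming the kubernetes provider is already configured; the secret name and the DB_* key names are our own convention, not something the module provides:

```hcl
# Expose the database credentials to the app namespace as a secret.
resource "kubernetes_secret" "database" {
  metadata {
    name      = "database"
    namespace = module.eks.app_namespace
  }

  data = {
    DB_HOST     = module.database.database.host
    DB_PORT     = module.database.database.port
    DB_USER     = module.database.database.username
    DB_NAME     = module.database.database.database
    DB_PASSWORD = module.database.database_password
  }
}
```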

aws/redis (new)

You can create a single Elasticache Redis instance:

module "redis" {
  source = "[email protected]:datarockets/infrastructure.git//aws/redis?ref=v0.2.0"

  app = var.app
  environment = var.environment

  vpc_id = module.eks.vpc_id # the Redis instance is placed in this VPC, in the subnets below
  eks_private_subnets_cidr_blocks = module.eks.private_cidr_blocks
  redis_subnets = {
    "10.0.31.0/24" = "ca-central-1a"
    "10.0.32.0/24" = "ca-central-1b"
  }
}

A new security group is added that allows pods from EKS private subnets to access the Redis server.

Migration from 0.1.0

Migrating namespace:

It is now possible to create the namespace outside of the k8s/basic module. We moved the kubernetes_namespace resource, so you have to move its record in the state to the new address:

terraform state mv module.kubernetes.kubernetes_namespace.application 'module.kubernetes.kubernetes_namespace.app[0]'

Migrating other resources:

terraform state mv module.kubernetes.kubernetes_namespace.cert-manager module.kubernetes.module.dependencies.kubernetes_namespace.cert-manager

terraform state mv module.kubernetes.helm_release.cert-manager module.kubernetes.module.dependencies.helm_release.cert-manager

terraform state mv module.kubernetes.helm_release.nginx-ingress module.kubernetes.module.dependencies.helm_release.nginx-ingress

terraform state mv module.kubernetes.kubernetes_secret.docker-config 'module.kubernetes.module.cluster.kubernetes_secret.docker-config["default"]'

terraform state mv module.kubernetes.kubernetes_manifest.cert-issuer-letsencrypt module.kubernetes.module.cluster.kubernetes_manifest.cert-issuer-letsencrypt

terraform state mv 'module.kubernetes.kubernetes_deployment.deployment' 'module.kubernetes.module.cluster.kubernetes_deployment.deployment'

terraform state mv 'module.kubernetes.kubernetes_service.service' 'module.kubernetes.module.cluster.kubernetes_service.service'

terraform state mv 'module.kubernetes.kubernetes_secret.secret' 'module.kubernetes.module.cluster.kubernetes_secret.secret'

terraform state mv 'module.kubernetes.kubernetes_service_account.service_account' 'module.kubernetes.module.cluster.kubernetes_service_account.service_account'

Migrating ingresses:

Run terraform state list | grep module.kubernetes.kubernetes_ingress. For each ingress found, run:

terraform state mv 'module.kubernetes.kubernetes_ingress.ingress["<ingress_name>"]' 'module.kubernetes.module.ingress["<ingress_name>"].kubernetes_ingress.ingress'
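If there are many ingresses, the per-ingress command can be generated instead of typed by hand. This is a hypothetical helper (the function name is ours, not part of the release); review the printed commands before executing them:

```shell
#!/bin/sh
# Print the terraform state mv command for one ingress state address.
ingress_mv_cmd() {
  addr="$1"
  # extract the ingress name from ...kubernetes_ingress.ingress["<name>"]
  name=$(printf '%s\n' "$addr" | sed -E 's/.*\["([^"]+)"\].*/\1/')
  printf "terraform state mv '%s' 'module.kubernetes.module.ingress[\"%s\"].kubernetes_ingress.ingress'\n" "$addr" "$name"
}

# Usage (review the output, then append `| sh` to execute):
# terraform state list | grep module.kubernetes.kubernetes_ingress. \
#   | while read -r addr; do ingress_mv_cmd "$addr"; done
```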