Terraform cycle with AWS and Kubernetes provider

Question

My Terraform code describes some AWS infrastructure to build a Kubernetes cluster, including some deployments into the cluster. When I try to destroy the infrastructure using terraform plan -destroy, I get a cycle:

module.eks_control_plane.aws_eks_cluster.this[0] (destroy)
module.eks_control_plane.output.cluster
provider.kubernetes
module.aws_auth.kubernetes_config_map.this[0] (destroy)
data.aws_eks_cluster_auth.this[0] (destroy)

Destroying the infrastructure by hand using just terraform destroy works fine. Unfortunately, Terraform Cloud uses terraform plan -destroy to plan the destruction first, which causes this to fail. Here is the relevant code:

Excerpt from the eks_control_plane module:

resource "aws_eks_cluster" "this" {
  count = var.enabled ? 1 : 0

  name     = var.cluster_name
  role_arn = aws_iam_role.control_plane[0].arn
  version  = var.k8s_version

  # https://docs.aws.amazon.com/eks/latest/userguide/control-plane-logs.html
  enabled_cluster_log_types = var.control_plane_log_enabled ? var.control_plane_log_types : []

  vpc_config {
    security_group_ids = [aws_security_group.control_plane[0].id]
    subnet_ids         = [for subnet in var.control_plane_subnets : subnet.id]
  }

  tags = merge(var.tags,
    {
    }
  )

  depends_on = [
    var.dependencies,
    aws_security_group.node,
    aws_iam_role_policy_attachment.control_plane_cluster_policy,
    aws_iam_role_policy_attachment.control_plane_service_policy,
    aws_iam_role_policy.eks_cluster_ingress_loadbalancer_creation,
  ]
}

output "cluster" {
  value = length(aws_eks_cluster.this) > 0 ? aws_eks_cluster.this[0] : null
}

aws-auth Kubernetes config map from aws_auth module:

resource "kubernetes_config_map" "this" {
  count = var.enabled ? 1 : 0

  metadata {
    name      = "aws-auth"
    namespace = "kube-system"
  }
  data = {
    mapRoles = jsonencode(
      concat(
        [
          {
            rolearn  = var.node_iam_role.arn
            username = "system:node:{{EC2PrivateDNSName}}"
            groups = [
              "system:bootstrappers",
              "system:nodes",
            ]
          }
        ],
        var.map_roles
      )
    )
  }

  depends_on = [
    var.dependencies,
  ]
}

Kubernetes provider from root module:

data "aws_eks_cluster_auth" "this" {
  count = module.eks_control_plane.cluster != null ? 1 : 0
  name  = module.eks_control_plane.cluster.name
}

provider "kubernetes" {
  version = "~> 1.10"

  load_config_file       = false
  host                   = module.eks_control_plane.cluster != null ? module.eks_control_plane.cluster.endpoint : null
  cluster_ca_certificate = module.eks_control_plane.cluster != null ? base64decode(module.eks_control_plane.cluster.certificate_authority[0].data) : null
  token                  = length(data.aws_eks_cluster_auth.this) > 0 ? data.aws_eks_cluster_auth.this[0].token : null
}

And this is how the modules are called:

module "eks_control_plane" {
  source  = "app.terraform.io/SDA-SE/eks-control-plane/aws"
  version = "0.0.1"
  enabled = local.k8s_enabled

  cluster_name          = var.name
  control_plane_subnets = module.vpc.private_subnets
  k8s_version           = var.k8s_version
  node_subnets          = module.vpc.private_subnets
  tags                  = var.tags
  vpc                   = module.vpc.vpc

  dependencies = concat(var.dependencies, [
    # Ensure that VPC including all security group rules, network ACL rules,
    # routing table entries, etc. is fully created
    module.vpc,
  ])
}


# aws-auth config map module. Creating this config map will allow nodes and
# Other users to join the cluster.
# CNI and CSI plugins must be set up before creating this config map.
# Enable or disable this via `aws_auth_enabled` variable.
# TODO: Add Developer and other roles.
module "aws_auth" {
  source  = "app.terraform.io/SDA-SE/aws-auth/kubernetes"
  version = "0.0.0"
  enabled = local.aws_auth_enabled

  node_iam_role = module.eks_control_plane.node_iam_role
  map_roles = [
    {
      rolearn  = "arn:aws:iam::${var.aws_account_id}:role/Administrator"
      username = "admin"
      groups = [
        "system:masters",
      ]
    },
    {
      rolearn  = "arn:aws:iam::${var.aws_account_id}:role/Terraform"
      username = "terraform"
      groups = [
        "system:masters",
      ]
    }
  ]
}

Removing the aws_auth config map, which means not using the Kubernetes provider at all, breaks the cycle. The problem is obviously that Terraform tries to destroy the Kubernetes cluster, which the Kubernetes provider requires. Manually removing the resources step by step using multiple terraform apply steps works fine, too.
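
The manual workaround can also be expressed with targeted destroy runs instead of multiple apply steps. This is only a sketch using the module addresses from this configuration, not the exact workflow described above:

# First destroy everything that needs the Kubernetes provider:
terraform destroy -target=module.aws_auth

# Then destroy the remaining infrastructure, including the EKS cluster:
terraform destroy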

Is there a way to tell Terraform to destroy all Kubernetes resources first, so that the provider is no longer required, and only then destroy the EKS cluster?

Answer 1

Score: 1

You can control the order of destruction using the depends_on meta-argument, as you already did in parts of your Terraform code.

If you add the depends_on argument to all of the resources that need to be destroyed first and have them depend on the EKS cluster, Terraform will destroy those resources before the cluster.
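
A minimal sketch of that idea, based on the configuration in the question: route a dependency on the cluster through the aws_auth module's existing dependencies variable, which the config map already lists in its depends_on. The remaining module arguments are unchanged and mostly omitted here.

# Root module: the config map now (transitively) depends on the EKS cluster,
# so a destroy removes the config map before the cluster.
module "aws_auth" {
  source  = "app.terraform.io/SDA-SE/aws-auth/kubernetes"
  version = "0.0.0"
  enabled = local.aws_auth_enabled

  node_iam_role = module.eks_control_plane.node_iam_role

  # Added: reuse the module's dependencies variable instead of introducing a
  # new one; kubernetes_config_map.this already has depends_on = [var.dependencies].
  dependencies = [
    module.eks_control_plane.cluster,
  ]

  # map_roles stays exactly as shown in the question.
}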

You can also visualize your configuration and its dependencies with the terraform graph command to help you decide which dependencies need to be created.

https://www.terraform.io/docs/cli/commands/graph.html
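
For example, to render the graph Terraform uses for a destroy plan and highlight any cycle edges (this assumes Graphviz's dot is available; the output file name is arbitrary):

terraform graph -type=plan-destroy -draw-cycles | dot -Tsvg > destroy-graph.svg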
