
Terraform - what's the correct way to create an EKS cluster and apply manifests to it?

Question

I have the following situation: I want to have a Terraform configuration in which I:

  1. Create an EKS cluster
  2. Install some manifests to it using kubectl_manifest resource.

So in essence, I have the following configuration:

... all the IAM stuff needed for the cluster

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.50"
    }

    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}

provider "aws" { }

provider "kubectl" {
    config_path = "~/.kube/config"
    config_context = aws_eks_cluster.this.arn # tried this also, with no luck
}

resource "aws_eks_cluster" "my-cluster" {
... the cluster configuration
}
... node group yada yada..

resource "null_resource" "update_kubeconfig" {
   provisioner "local-exec" {
      command = "aws eks update-kubeconfig --name my-cluster --region us-east-1"
   }
}

resource "kubectl_manifest" "some_manifest" {
   yaml_body = <some_valid_yaml>

   depends_on = [null_resource.update_kubeconfig]
}

So my hope is that after the null_resource.update_kubeconfig runs, and updates the .kube/config (which it does; checked), the kubectl_manifest.some_manifest will pick up on it and use the newly updated configuration. But it doesn't.

I don't currently have the error message at hand, but essentially what happens is: it tries to communicate with a previously created cluster (I have old - now nonexistent - clusters in my kubeconfig). Then it throws a "can't resolve DNS name" error for the old cluster.

So it seems that the kubeconfig file is loaded somewhere in the beginning of the run, and it isn't being refreshed when the kubectl_manifest resource is being created.

What is the right way to handle this?!

Answer 1

Score: 3

Since your Terraform code itself creates the cluster, you can reference it in the provider configuration:

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.my-cluster.name
}

provider "kubernetes" {
  host = aws_eks_cluster.my-cluster.endpoint

  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority.0.data)
}

Hope this helps.

Answer 2

Score: 1

The aws eks update-kubeconfig command will add the cluster context to your kubeconfig file but it doesn't switch to that context for kubectl commands. You can solve this a few different ways.

  1. Update your local-exec command to include ; kubectl config use-context my-cluster. This will update the global context after the file has been updated.
  2. Add a --kubeconfig ~/.kube/my-cluster option to the local-exec command. Any kubectl commands should then use this option, which keeps your cluster config in a separate file.
  3. Update your kubectl commands to use --context my-cluster which will set the cluster context only for that command.
  4. Update your kubectl provider to use the config_context with the value aws_eks_cluster.this.name. Contexts in the kubeconfig file are based on cluster names and not AWS ARNs.

Any of those options should work and it depends on how you like to manage your config.

Personally, I prefer to keep my clusters in separate config files because they're easier to share and I export KUBECONFIG in different shells instead of changing contexts back and forth.
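If you stay with the kubeconfig-based approach from the question, option 1 could be sketched like this, reusing the resource names from the question (the `--alias` flag, which names the written context explicitly, is assumed here so the `use-context` argument is predictable):

```hcl
# Sketch of option 1: switch the active kubectl context right after
# update-kubeconfig writes it, so subsequent kubectl calls target
# the new cluster instead of a stale context.
resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    command = "aws eks update-kubeconfig --name my-cluster --region us-east-1 --alias my-cluster && kubectl config use-context my-cluster"
  }
}
```

This keeps the global kubeconfig mutation visible in one place, at the cost of still depending on local CLI state rather than provider-level authentication.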

huangapple
  • Published on 2023-05-23 00:08:52
  • Please keep the original link when reposting: https://go.coder-hub.com/76308065.html