Terraform - what's the correct way to create an EKS cluster and apply manifests to it?
Question
I have the following situation: I want to have a Terraform configuration in which I:
- Create an EKS cluster
- Install some manifests to it using the kubectl_manifest resource.

So in essence, I have the following configuration:
... all the IAM stuff needed for the cluster

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.50"
    }
    kubectl = {
      source  = "gavinbunney/kubectl"
      version = ">= 1.7.0"
    }
  }
}
provider "aws" { }
provider "kubectl" {
config_path = "~/.kube/config"
config_context = aws_eks_cluster.this.arn # tried this also, with no luck
}
resource "aws_eks_cluster" "my-cluster" {
... the cluster configuration
}
... node group yada yada..
resource "null_resource" "update_kubeconfig" {
provisioner "local-exec" {
command = "aws eks update-kubeconfig --name my-cluster --region us-east-1"
}
}
resource "kubectl_manifest" "some_manifest" {
yaml_body = <some_valid_yaml>
depends_on = [null_resource.update_kubeconfig]
}
So my hope is that after null_resource.update_kubeconfig runs and updates ~/.kube/config (which it does; I checked), kubectl_manifest.some_manifest will pick that up and use the newly updated configuration. But it doesn't.
I don't have the error message at hand right now, but essentially what happens is: it tries to communicate with a previously created cluster (I have previous, now non-existent, clusters in the kubeconfig) and then throws a "can't resolve DNS name" error for the old cluster.
So it seems that the kubeconfig file is loaded somewhere at the beginning of the run and isn't refreshed by the time the kubectl_manifest resource is created.
What is the right way to handle this?!
Answer 1
Score: 3
Since your Terraform code itself creates the cluster, you can reference it in the provider configuration:

data "aws_eks_cluster_auth" "main" {
  name = aws_eks_cluster.my-cluster.name
}

provider "kubernetes" {
  host                   = aws_eks_cluster.my-cluster.endpoint
  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority.0.data)
}

Hope this helps.
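A note on applying the same idea to the provider actually used in the question: the question configures gavinbunney/kubectl rather than hashicorp/kubernetes, but that provider accepts the same style of connection settings, so the pattern should carry over. Below is a minimal sketch of that variant (not part of the original answer), reusing the aws_eks_cluster_auth data source above; load_config_file = false stops the provider from reading ~/.kube/config, which sidesteps the stale-context problem entirely.

provider "kubectl" {
  host                   = aws_eks_cluster.my-cluster.endpoint
  token                  = data.aws_eks_cluster_auth.main.token
  cluster_ca_certificate = base64decode(aws_eks_cluster.my-cluster.certificate_authority.0.data)
  load_config_file       = false # do not read ~/.kube/config at all
}

resource "kubectl_manifest" "some_manifest" {
  # The provider now talks directly to the cluster created above, so the
  # update-kubeconfig null_resource and its depends_on should no longer be needed.
  yaml_body = <some_valid_yaml>
}

With this wiring the provider gets its endpoint and credentials from the cluster resource itself rather than from a kubeconfig file on disk.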
Answer 2
Score: 1
The aws eks update-kubeconfig command will add the cluster context to your kubeconfig file, but it doesn't switch to that context for kubectl commands. You can solve this a few different ways.
- Update your local-exec command to include ; kubectl config use-context my-cluster. This will update the global context after the file has been updated (sketched below).
- Add a --kubeconfig ~/.kube/my-cluster option to the local-exec command. Any kubectl commands should then use this option, which keeps your cluster config in a separate file.
- Update your kubectl commands to use --context my-cluster, which will set the cluster context only for that command.
- Update your kubectl provider to use config_context with the value aws_eks_cluster.this.name. Contexts in the kubeconfig file are based on cluster names and not AWS ARNs.

Any of those options should work; it depends on how you like to manage your config.
Personally, I prefer to keep my clusters in separate config files because they're easier to share, and I export KUBECONFIG in different shells instead of changing contexts back and forth.
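For concreteness, here is a hedged sketch of the first option applied to the question's null_resource (not from the original answer). One assumption worth flagging: aws eks update-kubeconfig names the new context after the cluster ARN by default, so the sketch passes --alias to give it the predictable name my-cluster before switching to it.

resource "null_resource" "update_kubeconfig" {
  provisioner "local-exec" {
    # Refresh the kubeconfig entry under a predictable alias, then make it the
    # current context so later kubectl-based operations target this cluster.
    command = "aws eks update-kubeconfig --name my-cluster --region us-east-1 --alias my-cluster; kubectl config use-context my-cluster"
  }
}

The second option is the same idea with a --kubeconfig ~/.kube/my-cluster flag on the command instead, paired with config_path = "~/.kube/my-cluster" on the kubectl provider.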