Google Cloud Kubernetes Clusters dashboard does not show clusters from "kubectl config view"

Question

I am discovering Google Cloud Kubernetes, being fairly new to the topic. I have created a couple of clusters and later deleted them (or so I thought).
When I go to the console I see one new cluster:

(screenshot: the Kubernetes clusters list in Cloud Console, showing a single cluster, questy-java-cluster)

But when I run the command:

kubectl config view

I see other clusters defined:

apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.68.77.89
  name: gke_question-tracker_us-central1-c_hello-java-cluster
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://34.135.56.138
  name: gke_quizdev_us-central1_autopilot-cluster-1
contexts:
- context:
    cluster: gke_question-tracker_us-central1-c_hello-java-cluster
    user: gke_question-tracker_us-central1-c_hello-java-cluster
  name: gke_question-tracker_us-central1-c_hello-java-cluster
- context:
    cluster: gke_quizdev_us-central1_autopilot-cluster-1
    user: gke_quizdev_us-central1_autopilot-cluster-1
  name: gke_quizdev_us-central1_autopilot-cluster-1
current-context: gke_quizdev_us-central1_autopilot-cluster-1
kind: Config
preferences: {}
users:
- name: gke_question-tracker_us-central1-c_hello-java-cluster
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true
- name: gke_quizdev_us-central1_autopilot-cluster-1
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args: null
      command: gke-gcloud-auth-plugin
      env: null
      installHint: Install gke-gcloud-auth-plugin for use with kubectl by following
        https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke
      interactiveMode: IfAvailable
      provideClusterInfo: true

  1. Where in the Google Cloud Dashboard can I see the clusters mentioned in the config file (gke_question-tracker_us-central1-c_hello-java-cluster, gke_quizdev_us-central1_autopilot-cluster-1)?

  2. Where in the Google Cloud Dashboard can I see the users mentioned in the config file?

  3. Why do I not see questy-java-cluster after running the kubectl config view command?

Answer 1

Score: 1

This is a tad confusing.

There are 2 related but disconnected "views" of the clusters.

The first view is Google Cloud's "view". This is what you're seeing in Cloud Console. You would see the same (!) details using e.g. gcloud container clusters list --project=quizdev (see docs). This is the current set of Kubernetes cluster resources: there's one cluster, questy-java-cluster, in the current project (quizdev).
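
If you only need a quick summary, the listing can be narrowed with gcloud's standard --format flag; this is a minimal sketch, assuming the usual GKE resource fields (name, location, status):

gcloud container clusters list --project=quizdev --format="table(name,location,status)"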

kubectl, on the other hand, generally uses a so-called kubeconfig file (default Linux location: ~/.kube/config) to hold the configuration information for clusters, contexts (which combine a cluster with a user, and possibly more), and users, though you can also specify these on the command line. See Organizing Cluster Access using kubeconfig files.
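
A few stock kubectl config subcommands make that file easier to inspect than reading the raw YAML; the context name below is taken from the config shown in the question:

kubectl config get-contexts       # table of all contexts; the current one is starred
kubectl config current-context    # prints just the active context name
kubectl config use-context gke_quizdev_us-central1_autopilot-cluster-1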

Now, it's mostly up to you (the developer) to keep the Google Cloud view and the kubectl view in sync.

When you gcloud container clusters create, gcloud creates the cluster and (IIRC) configures the default kubeconfig file for you. This is to make it easier to use kubectl immediately after creating the cluster. You can also always run gcloud container clusters get-credentials to repeat the credentials step (configuring kubeconfig).

If you create clusters using Cloud Console, you must run gcloud container clusters get-credentials manually in order to update your local kubeconfig file(s) with the cluster's credentials.
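
As a sketch, for the first cluster in the kubeconfig above, whose entry name encodes the project (question-tracker) and zone (us-central1-c):

gcloud container clusters get-credentials hello-java-cluster --zone us-central1-c --project question-tracker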

I don't recall whether gcloud container clusters delete deletes the corresponding credentials in the default kubeconfig file; I think it doesn't.

The result is that there's usually 'drift' between what the kubeconfig file contains and the clusters that actually exist; I create and delete clusters daily and periodically tidy my kubeconfig file for this reason.
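
One way to spot this drift is to compare the two views side by side; a minimal sketch using standard flags of both tools:

gcloud container clusters list --format="value(name)"
kubectl config view -o jsonpath='{range .clusters[*]}{.name}{"\n"}{end}'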

One additional complication is that (generally) there's one kubeconfig file (~/.kube/config), but you may have multiple Google Cloud Projects. The clusters you've run get-credentials for (whether manually or automatically), even when they span multiple (!) Google Cloud Projects, will all be present in the one local kubeconfig.

There's a one-to-one mapping, though, between a Google Cloud Project, Location and Cluster Name and a kubeconfig cluster name:

gke_{PROJECT}_{LOCATION}_{CLUSTER-NAME}
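
For example, gke_quizdev_us-central1_autopilot-cluster-1 from the config above decodes to project quizdev, location us-central1 (a region rather than a zone, hence a regional cluster), and cluster name autopilot-cluster-1.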

Lastly, if one or more developers use multiple hosts to access Kubernetes clusters, each host will need to reflect the kubeconfig configuration (server, user, context) for each cluster it needs to access.

GKE does a decent job of helping you manage kubeconfig configurations. The complexity and confusion arise because it does some of this configuration implicitly (gcloud container clusters create), and it would be better to make this more transparent. If you use any managed Kubernetes offering (AWS, Azure, Linode, Vultr, etc.), they all provide some analog of this process, manual or automatic, to help manage the entries in kubeconfig.

Answer 2

Score: 0

Ad 1)
Apparently I had removed some clusters manually using Google Cloud Console, but they stayed in my kube config file. I followed the advice from @DazWilkin and tidied my kube config, removing all the disconnected entries like this:

kubectl config unset users
kubectl config unset clusters
kubectl config unset contexts
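
Note that kubectl config unset with a bare section name, as above, clears that entire section (every user, cluster, and context), not only the disconnected entries. To remove a single stale entry instead, target it by name; for example, for one of the dead clusters from the question:

kubectl config delete-context gke_quizdev_us-central1_autopilot-cluster-1
kubectl config delete-cluster gke_quizdev_us-central1_autopilot-cluster-1
kubectl config unset users.gke_quizdev_us-central1_autopilot-cluster-1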

Ad 2)
In order to see the users I went here: https://console.cloud.google.com/iam-admin/serviceaccounts?project=quizdev

(screenshot: the service accounts list for the quizdev project)

or by running the command:

gcloud iam service-accounts list
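
The same listing can be scoped to a project explicitly (quizdev is the project from the question):

gcloud iam service-accounts list --project=quizdev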

Ad 3)
In order to add questy-java-cluster to the kube config file, I ran:

gcloud container clusters get-credentials

This added the cluster to the config file. Now I have everything in sync.
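
For completeness, get-credentials takes the cluster name and its location; for the cluster visible in Cloud Console, the full command would look something like the sketch below (the region and project are assumptions, since the answer elides them):

# region and project below are assumed; substitute the cluster's real location
gcloud container clusters get-credentials questy-java-cluster --region us-central1 --project quizdev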

Answer 3

Score: 0

Also, another way to keep the kube config in sync with Google Cloud Console is to use:

gcloud container clusters delete hello-java-cluster --zone us-central1-c

It will remove both:

  • the Google-managed cluster
  • the kubectl config entries for this cluster
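
Afterwards, a quick way to confirm both views agree is to re-list each one:

gcloud container clusters list
kubectl config get-contexts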
