Create a cluster that uses Pod Security Admission

Question

I am trying to configure the API server to consume the Pod Security Admission configuration file (/etc/config/cluster-level-pss.yaml) during cluster creation.
My system is Ubuntu 22.04.2 LTS x86_64.

kind version: v0.17.0 go1.19.2 linux/amd64

minikube version: v1.29.0

The config file is:

kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  kubeadmConfigPatches:
  - |
    kind: ClusterConfiguration
    apiServer:
      extraArgs:
        admission-control-config-file: /etc/config/cluster-level-pss.yaml
      extraVolumes:
      - name: accf
        hostPath: /etc/config
        mountPath: /etc/config
        readOnly: false
        pathType: "DirectoryOrCreate"
  extraMounts:
  - hostPath: /tmp/pss
    containerPath: /etc/config
    # optional: if set, the mount is read-only.
    # default false
    readOnly: false
    # optional: if set, the mount needs SELinux relabeling.
    # default false
    selinuxRelabel: false
    # optional: set propagation mode (None, HostToContainer or Bidirectional)
    # see https://kubernetes.io/docs/concepts/storage/volumes/#mount-propagation
    # default None
    propagation: None
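
For context, the file the API server is told to consume, /etc/config/cluster-level-pss.yaml (placed under /tmp/pss on the host), is an AdmissionConfiguration for the PodSecurity plugin. A minimal sketch of its usual shape follows; the enforce/audit/warn levels here are placeholders, and note that with the v1.24.0 node image the PodSecurityConfiguration API is still pod-security.admission.config.k8s.io/v1beta1 (v1 only exists from Kubernetes v1.25):

apiVersion: apiserver.config.k8s.io/v1
kind: AdmissionConfiguration
plugins:
- name: PodSecurity
  configuration:
    apiVersion: pod-security.admission.config.k8s.io/v1beta1
    kind: PodSecurityConfiguration
    # Placeholder levels: pick the Pod Security Standards you actually want.
    defaults:
      enforce: "baseline"
      enforce-version: "latest"
      audit: "restricted"
      audit-version: "latest"
      warn: "restricted"
      warn-version: "latest"
    exemptions:
      usernames: []
      runtimeClasses: []
      namespaces: []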

When I run:
kind create cluster --name psa-with-cluster-pss --image kindest/node:v1.24.0 --config /tmp/pss/cluster-config.yaml --retain

I get (the last step crashes after a long time):

Creating cluster "psa-with-cluster-pss" ...
 ✓ Ensuring node image (kindest/node:v1.24.0) 🖼
 ✓ Preparing nodes 📦
 ✓ Writing configuration
 Starting control-plane

traceback:

0226 15:54:45.575727     123 round_trippers.go:553] GET https://psa-with-cluster-pss-control-plane:6443/healthz?timeout=10s in 0 milliseconds

Unfortunately, an error has occurred:
    timed out waiting for the condition

This error is likely caused by:
    - The kubelet is not running
    - The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
    - 'systemctl status kubelet'
    - 'journalctl -xeu kubelet'

Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI.
Here is one example how you may list all running Kubernetes containers by using crictl:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause'
    Once you have found the failing container, you can inspect its logs with:
    - 'crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID'
couldn't initialize a Kubernetes cluster
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/init.runWaitControlPlanePhase
    cmd/kubeadm/app/cmd/phases/init/waitcontrolplane.go:108
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:234
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:250
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1571
error execution phase wait-control-plane
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run.func1
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:235
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).visitAll
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:421
k8s.io/kubernetes/cmd/kubeadm/app/cmd/phases/workflow.(*Runner).Run
    cmd/kubeadm/app/cmd/phases/workflow/runner.go:207
k8s.io/kubernetes/cmd/kubeadm/app/cmd.newCmdInit.func1
    cmd/kubeadm/app/cmd/init.go:153
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).execute
    vendor/github.com/spf13/cobra/command.go:856
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).ExecuteC
    vendor/github.com/spf13/cobra/command.go:974
k8s.io/kubernetes/vendor/github.com/spf13/cobra.(*Command).Execute
    vendor/github.com/spf13/cobra/command.go:902
k8s.io/kubernetes/cmd/kubeadm/app.Run
    cmd/kubeadm/app/kubeadm.go:50
main.main
    cmd/kubeadm/kubeadm.go:25
runtime.main
    /usr/local/go/src/runtime/proc.go:250
runtime.goexit
    /usr/local/go/src/runtime/asm_amd64.s:1571
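
Because the cluster was created with --retain, the failed node container is kept around, so the crictl and journalctl commands suggested in the output above have to be run inside that container rather than directly on the host. A rough sketch, assuming the default docker provider and the default node container name psa-with-cluster-pss-control-plane (crictl and journalctl are normally available inside the kindest/node image):

# Check kubelet logs inside the retained kind node container
docker exec -it psa-with-cluster-pss-control-plane journalctl -xeu kubelet

# List the control-plane containers (the kube-apiserver is the usual suspect here)
docker exec psa-with-cluster-pss-control-plane \
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock ps -a | grep kube | grep -v pause

# Inspect the logs of a failing container, using its ID from the listing above
docker exec psa-with-cluster-pss-control-plane \
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock logs CONTAINERID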

Answer 1

Score: 1

Try removing and reinstalling Docker, docker-ce and CNI. As part of the kubelet installation procedure you must configure the Docker container runtime.

The error message appears because a few steps were missed that are not mentioned in the documented procedure. Please go through the official container runtime documentation for more information. You may have to reset, i.e. run kubeadm reset, then use a static IP, and then run kubeadm init:

sudo kubeadm reset
sudo apt-get install -qy kubelet kubectl kubeadm
sudo apt-mark hold kubelet kubeadm kubectl
sudo mkdir /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
sudo kubeadm init --control-plane-endpoint kube-master:6443 --pod-network-cidr=192.168.0.0/16

If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:

- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'

Refer to Kubeadm init fails with controlPlaneEndpoint for more information.

Also refer to Kind Known Issues: Troubleshooting kind, and check Failure to Create Cluster with Docker Desktop as Container Runtime for more information.
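
Following the kind troubleshooting pointer above: because --retain was used, the node's logs can also be exported from the host for offline inspection. A small sketch, assuming the cluster name from the question:

# Dump kubelet, containerd and static-pod logs from the retained node into ./kind-logs
kind export logs ./kind-logs --name psa-with-cluster-pss

The kube-apiserver log in the exported bundle is a good place to look if the API server failed to start because it could not load the admission-control-config-file.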

