Kubespray: Can't run upgrade due to cordon failing on istiod
Question
I am running Kubespray v2.19.0 on 4 bare-metal servers running Ubuntu 20.04. The 4 servers are set up as one master node and 3 worker nodes. It's a small cluster I run at my house for my own compute and to "play".
I am trying to run an upgrade using:
ansible-playbook -kK -i inventory/mycluster/hosts.yaml upgrade-cluster.yml --become --become-user=root -e ignore_assert_errors=yes
But when it tries to cordon off the master node (the server name is server) I get the following:
TASK [upgrade/pre-upgrade : Drain node] ****************************************
fatal: [server -> server]: FAILED! => {"attempts": 3, "changed": true, "cmd": ["/usr/local/bin/kubectl", "--kubeconfig", "/etc/kubernetes/admin.conf", "drain", "--force", "--ignore-daemonsets", "--grace-period", "300", "--timeout", "360s", "--delete-emptydir-data", "server"], "delta": "0:06:02.042964", "end": "2023-05-28 10:14:07.440240", "failed_when_result": true, "msg": "non-zero return code", "rc": 1, "start": "2023-05-28 10:08:05.397276", "stderr": "WARNING: ignoring DaemonSet-managed Pods: ingress-nginx/ingress-nginx-controller-rmdtj, kube-system/calico-node-rmdjg, kube-system/kube-proxy-fb6ff, kube-system/nodelocaldns-w5tzq\nerror when evicting pods/\"istiod-6b56cffbd9-thdjl\" -n \"istio-system\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.\nerror when evicting pods/\"istiod-6b56cffbd9-thdjl\" -n \"istio-system\" (will retry after 5s):...
Is there some flag I can pass to the upgrade to tell it to ignore the disruption budget, or to ignore those errors altogether?
Answer 1
Score: 2
Rather than trying to create a workaround, I'd suggest resolving the actual problem.
You have two options: you can either increase the replicas to 2, or you can disable the pod disruption budget.
Since you only run a home lab, I'd take option two to avoid wasting resources on a second pod that you probably don't need.
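If you first want to see what is actually blocking the eviction, or go with option one instead, something like this should work. This is a rough sketch: it assumes you run it on the master with the same admin.conf the drain already uses, and that istiod is a plain Deployment named istiod, as the pod name in your error suggests.
# Show the PodDisruptionBudget istiod ships with (typically minAvailable: 1, which blocks eviction of a single replica)
kubectl --kubeconfig /etc/kubernetes/admin.conf -n istio-system get pdb
# Option one: bump istiod to 2 replicas so the budget can be satisfied during the drain
kubectl --kubeconfig /etc/kubernetes/admin.conf -n istio-system scale deployment istiod --replicas=2
Keep in mind that if the operator (or an HPA) manages istiod, a manual scale may get reconciled back, so the durable place to change either setting is the install configuration below.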
It's not clear how you installed Istio, but this is how you do it using an IstioOperator manifest:
apiVersion: install.istio.io/v1alpha1
kind: IstioOperator
spec:
  profile: default # your profile
  tag: 1.17.2 # your version
  values:
    global:
      defaultPodDisruptionBudget:
        enabled: false
Save it to a file like config.yaml and install it with istioctl install -f config.yaml.
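Afterwards you can check that the budget is gone and retry the drain by hand before re-running the playbook. Roughly (a sketch, reusing the exact drain flags from your error; adjust the PDB name to whatever the listing actually shows):
# The istiod PDB should no longer be listed
kubectl --kubeconfig /etc/kubernetes/admin.conf -n istio-system get pdb
# If it is still there, remove it by hand (use the name from the listing)
kubectl --kubeconfig /etc/kubernetes/admin.conf -n istio-system delete pdb istiod
# Confirm the drain now succeeds, put the node back, then re-run the upgrade
kubectl --kubeconfig /etc/kubernetes/admin.conf drain server --force --ignore-daemonsets --delete-emptydir-data --grace-period 300 --timeout 360s
kubectl --kubeconfig /etc/kubernetes/admin.conf uncordon server
ansible-playbook -kK -i inventory/mycluster/hosts.yaml upgrade-cluster.yml --become --become-user=root -e ignore_assert_errors=yes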