Best practice: spread pods across nodes evenly

Question:

I have 4 nodes across different zones:

Node A : zone a
Node B : zone b
Node C : zone c
Node D : zone c


I want to spread the pods across Nodes A, B, and C. I have a Deployment with 3 replicas to spread across those nodes, one pod per node.

My deployments are managed with Kustomize and ArgoCD. Using topologySpreadConstraints would require updating the labels, but the labels are immutable in this case.

The current Deployment configuration:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-apps
spec:
  replicas: 3
  revisionHistoryLimit: 0
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: app
                operator: In
                values:
                - my-apps
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone
          whenUnsatisfiable: DoNotSchedule
          labelSelector:
            matchLabels:
              app: my-apps
              version: v1
...
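For the topologySpreadConstraints labelSelector above to count these pods, the Pod template itself must carry matching labels. A minimal sketch of the template metadata, assuming the app: my-apps and version: v1 labels come straight from the selector (adjust to the actual labels in use):

```yaml
spec:
  template:
    metadata:
      labels:
        app: my-apps     # must match topologySpreadConstraints.labelSelector
        version: v1
```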


I've added labels to those 3 nodes, and this configuration worked well at first. But when the Deployment is updated with a rolling update, the pods become imbalanced across the nodes:

zone a : 2 pod
zone b : 1 pod
zone c : 0 pod


I've also tried podAntiAffinity, but the pods stay Pending if I use hard anti-affinity, and they are still imbalanced if I use soft anti-affinity. Any suggestions on best practice for this case? Did I miss something?


Answer 1

Score: 0


Rolling updates remove some pods from nodes and add others. This can leave the pods imbalanced across nodes, because the pods already running stay where they are while the replacement pods may be scheduled elsewhere. To limit this, set the maxUnavailable option in the rolling update strategy, which caps how many pods can be unavailable at any point during the update. This is configured in the Deployment spec:

spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 0

With maxUnavailable: 1 and maxSurge: 0, the rollout replaces at most one pod at a time instead of surging extra pods, which helps keep the pods balanced across zones during the update. Try it and let me know if this works.

If you are scaling down the pods, note this limitation from the official documentation:

There's no guarantee that the constraints remain satisfied when Pods are removed. For example, scaling down a Deployment may result in an imbalanced Pods distribution. You can use a tool such as the Descheduler to rebalance the Pods distribution.
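As a sketch of that approach, the Descheduler can be run with a policy that evicts pods violating topology spread constraints; the policy schema differs between Descheduler releases, so treat this v1alpha1 example as an assumption to verify against the version you deploy:

```yaml
apiVersion: "descheduler/v1alpha1"
kind: "DeschedulerPolicy"
strategies:
  # Evict pods that currently violate a DoNotSchedule topology spread constraint
  "RemovePodsViolatingTopologySpreadConstraint":
    enabled: true
    params:
      # Only act on hard (DoNotSchedule) constraints, not ScheduleAnyway ones
      includeSoftConstraints: false
```

Evicted pods are then recreated and rescheduled by the kube-scheduler, which re-applies the topologySpreadConstraints and restores the even spread.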

huangapple
  • Posted on 2023-02-06 13:09:58
  • Please retain this link when republishing: https://go.coder-hub.com/75357522.html