Add new node pool to GKE cluster

Question

I am running a single-node GKE cluster with 4 vCPUs and 16 GB of memory. Now I am planning to add one more node pool with 1 vCPU and 3.75 GB of RAM.

Right now, on the single node, I am running workloads like Elasticsearch, Redis, and RabbitMQ as StatefulSets with attached disks.

I have not added any affinity or anti-affinity to the Pod configuration. If I add the new node, it is possible that some of these Pods will be scheduled onto it, while I am only planning to run stateless Pods on the new node.

Is there any way I can stop Elasticsearch, Redis, or RabbitMQ from being scheduled onto the new node without adding affinity, and without restarting (touching) any Pod or Service? Elasticsearch, Redis, and RabbitMQ should stay on the old node only.


Answer 1

Score: 2

nodeSelector and kubectl patch could be the solution.

Using a new label

You must first label the node running the stateful workloads, for instance with the label statefullnode=true, using the following command:

kubectl label nodes <node-name> statefullnode=true

Then you must patch each Deployment running on this node using kubectl patch:

kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"statefullnode": "true"}}}}}'
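
For reference, the patch above simply adds a nodeSelector to the Deployment's pod template. A minimal sketch of what the resulting manifest would look like (nginx-deployment and the nginx image are just the example names used here, not anything from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      nodeSelector:
        statefullnode: "true"   # keeps these Pods on the labelled node
      containers:
      - name: nginx
        image: nginx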

Using the node name

If you don't want to label your node, you can simply use the node name as the label for the nodeSelector.
For instance, if your node's name is my-gke-node, then run:

kubectl patch deployments nginx-deployment -p '{"spec": {"template": {"spec": {"nodeSelector": {"kubernetes.io/hostname": "my-gke-node"}}}}}'

Run kubectl get nodes to get the names of your cluster’s nodes.
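
To double-check the result, the standard kubectl commands below (nothing specific to this setup) show whether the label is in place and which node each Pod ended up on:

kubectl get nodes --show-labels
kubectl get pods -o wide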

Answer 2

Score: 1

What you need is Taints: "they allow a node to repel a set of pods" (more details here: https://kubernetes.io/docs/concepts/configuration/taint-and-toleration/).

From the link above (assuming node1 is the new node in your example):

kubectl taint nodes node1 key=value:NoSchedule - This means that no pod will be able to schedule onto node1 unless it has a matching toleration (and your existing pods don't have a matching toleration).
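
To verify, kubectl describe node node1 lists the taint under Taints:, and if you ever need to undo it, the standard removal syntax is the same command with a trailing dash:

kubectl taint nodes node1 key=value:NoSchedule-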

And to each of the new Pods that you want to schedule on the new node, you will apply this toleration:

tolerations:
- key: "key"
  operator: "Equal"
  value: "value"
  effect: "NoSchedule"

Thus, the old Pods won't be able to schedule onto this new node; only the new Pods, and only if you apply the toleration to them, will be able to schedule on it.
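
On GKE you can also have the taint applied to every node of the new pool at creation time instead of tainting nodes by hand. A sketch with placeholder pool and cluster names (the machine type matches the 1 vCPU / 3.75 GB size mentioned in the question):

gcloud container node-pools create stateless-pool --cluster=my-cluster --machine-type=n1-standard-1 --node-taints=key=value:NoSchedule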
