Kubernetes resource requests/limits based on time plan
Question
We can define memory and CPU requests/limits for pods; that is not a problem. Is it possible to define different memory and CPU requests/limits based on a time plan? I am thinking about something like this:
8AM - 8PM
- CPU request 0.5, limit 2
- RAM request 1Gi, limit 2Gi
8:01PM - 7:59AM
- CPU request 0.1, limit 0.5
- RAM request 500Mi, limit 1Gi
This would help us tune performance, e.g. high-performance real-time processing during the day and lower performance for the same pods during the night. Have you solved a similar challenge?
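For reference, here is how the daytime values above would be written as a standard container resources block (the container name is a placeholder); the question is essentially about switching between two such blocks on a schedule:

containers:
- name: worker                # placeholder container name
  resources:
    requests:
      cpu: "500m"             # 0.5 CPU, the 8AM - 8PM values from the question
      memory: "1Gi"
    limits:
      cpu: "2"
      memory: "2Gi"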
Answer 1
Score: 2
This is not something supported by Kubernetes core.
This has been discussed many times (here or there, for example).
I can see two ways of solving this challenge:
- Using a CronJob and an image that embeds a kubectl binary to patch your resources up and down on a schedule (a companion CronJob for the evening window is sketched after this list):
apiVersion: batch/v1
kind: CronJob
metadata:
  name: patch-memory
spec:
  schedule: "0 8 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: scale
            image: <your-repo>/kubectl:<tag>
            command:
            - kubectl
            - patch
            - deployment
            - <your deployment>
            - --patch
            - '{"spec": {"template": {"spec": {"containers": [{"name": "<your container>", "resources": {"requests": {"memory": "XXXMi"}}}]}}}}'
          restartPolicy: OnFailure
- Using a custom controller like this one or this one, or others you can find online. I have no experience using those projects myself.
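As mentioned in the first option, a minimal sketch of the companion CronJob for the evening window, filled in with the night-time values from the question. The name patch-memory-night and the ServiceAccount patch-resources are assumptions: the Job's pod needs a ServiceAccount with RBAC permission to patch Deployments, and <your deployment>, <your container> and the image remain placeholders:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: patch-memory-night               # hypothetical name for the evening scale-down job
spec:
  schedule: "0 20 * * *"                 # 8 PM, in the cluster's time zone
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: patch-resources   # assumed ServiceAccount allowed to patch Deployments
          containers:
          - name: scale
            image: <your-repo>/kubectl:<tag>
            command:
            - kubectl
            - patch
            - deployment
            - <your deployment>
            - --patch
            - '{"spec": {"template": {"spec": {"containers": [{"name": "<your container>", "resources": {"requests": {"cpu": "100m", "memory": "500Mi"}, "limits": {"cpu": "500m", "memory": "1Gi"}}}]}}}}'
          restartPolicy: OnFailure

Note that patching the pod template triggers a rolling restart of the Deployment, so the pods are recreated with the new requests/limits at each switch.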
You could also consider using Horizontal Pod Autoscaling: this allows you to add more pods to a Deployment or StatefulSet based on resource usage.
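A rough sketch of what that looks like with the autoscaling/v2 API; my-app and my-app-hpa are placeholder names, and 70% CPU utilization is just an illustrative target:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder: the Deployment to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70  # scale out when average CPU usage exceeds 70% of requests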
On the other hand, you could also use VPA (Vertical Pod Autoscaler). This will dynamically set resource requests/limits on your workload based on its resource usage, not in a cron style like you mentioned.
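If you go that route, a minimal sketch of a VPA object; this assumes the Vertical Pod Autoscaler components are installed in the cluster, and my-app / my-app-vpa are placeholder names:

apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: my-app-vpa              # hypothetical name
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                # placeholder: the workload to autoscale
  updatePolicy:
    updateMode: "Auto"          # let VPA evict and recreate pods with updated requests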
Comments