How to trigger re-creation of a deployment without any command
Question
Apologies in advance, I am quite new to Kubernetes. We have some Helm charts that we use to deploy monitoring stacks to various environments. I am updating the Helm chart versions, and one version in particular requires the Deployment to be re-created.
I tried running
kubectl delete deployment <deploymentname>
and then applying the update, which works, but I need a way to trigger the re-creation from the values.yaml file so that it happens in every environment when I run the pipeline, instead of me going to each one manually and deleting the Deployments beforehand.
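For reference, the manual sequence described above looks roughly like this (a minimal sketch; the release name pgw is taken from the error labels below, and the chart reference is an assumption):

    # Delete the old Deployment, then re-apply the updated chart.
    # The release and chart names here are assumptions, not from the question.
    kubectl delete deployment pgw-prometheus-pushgateway
    helm upgrade --install pgw prometheus-community/prometheus-pushgateway -f values.yaml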
I also tried adding an annotation like
annotations:
  kubectl.kubernetes.io/restartedAt:
but that didn't work.
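For comparison, that annotation is the one kubectl rollout restart normally stamps onto the pod template, not the Deployment's top-level metadata, and it carries a timestamp value (a minimal sketch; the timestamp is illustrative):

    # Effect of `kubectl rollout restart deployment/<name>` on the manifest:
    spec:
      template:
        metadata:
          annotations:
            kubectl.kubernetes.io/restartedAt: "2023-02-22T13:40:55Z"  # illustrative

Note that this only rolls the pods; it does not re-create the Deployment object itself.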
The error I am getting:
13:40:55 2023-02-22 13:40:55,354 - ERROR -
Command 'kubectl --kubeconfig=/tmp/tmpxv384dq8 apply --validate=False
-f /tmp/tmp9ku4lg20' failed with: The Deployment "pgw-prometheus-pushgateway"
is invalid: spec.selector: Invalid value:
v1.LabelSelector{MatchLabels:map[string]string{"app.kubernetes.io/instance":"pgw",
"app.kubernetes.io/name":"prometheus-pushgateway"},
MatchExpressions:[]v1.LabelSelectorRequirement(nil)}: field is immutable
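For context, the field the error points at is spec.selector in the rendered Deployment, which Kubernetes forbids changing after creation, hence the need to delete and re-create (a minimal sketch; the labels are copied from the error message):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: pgw-prometheus-pushgateway
    spec:
      selector:            # immutable once the Deployment exists
        matchLabels:
          app.kubernetes.io/instance: pgw
          app.kubernetes.io/name: prometheus-pushgateway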
Answer 1
Score: 0
If I understand correctly, you have a number of Kubernetes applications managed via Helm charts and you want one of them to be forcibly restarted.
I assume this is what you need because, if there are no significant changes to your application's values.yaml, the helm upgrade command does not trigger a re-deploy of the new version.
To make sure that a restart occurs on every upgrade, you can modify the Deployment template by adding:
annotations:
  rollme: {{ randAlphaNum 5 | quote }}
https://v3.helm.sh/docs/howto/charts_tips_and_tricks/#automatically-roll-deployments
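Placed in context, the annotation belongs on the pod template inside the chart's Deployment manifest, so each helm upgrade renders a fresh random value and forces a rollout (a minimal sketch following the linked tip):

    kind: Deployment
    spec:
      template:
        metadata:
          annotations:
            rollme: {{ randAlphaNum 5 | quote }}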
Answer 2
Score: 0
So basically the solution I found was overriding the deployment name. I added a fullnameOverride value to the values.yaml file (this is provided in the Helm chart's values) and changed the name slightly from the previous one, which deleted the old Deployment and created the new one without me running kubectl delete deploy beforehand.
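A minimal sketch of what that override might look like in values.yaml (the new name here is hypothetical):

    # values.yaml
    # Changing the resource name makes the next `helm upgrade` create a new
    # Deployment and remove the old one, sidestepping the immutable selector.
    fullnameOverride: pgw-pushgateway-v2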