Kubernetes rolling update delete old pod before starting a new one
Question
I have a deployment whose app creates a lock on a file and crashes if a lock already exists. How can I ask Kubernetes to remove the old pod before spinning up the new one?
I know you usually want the opposite: spin up the new pod first and remove the old one only once the new one is ready, to avoid downtime. In this case I don't care; it's Typesense in a development environment, and we use the SaaS for staging and production.
Thanks for any help!
Answer 1
Score: 2
The easiest way to achieve that is to set the deployment strategy to `Recreate`.
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  [...]
spec:
  selector:
    [...]
  strategy:
    type: Recreate
  template:
    [...]
```
From the docs:
> All existing Pods are killed before new ones are created when .spec.strategy.type==Recreate.
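A complete minimal sketch of such a Deployment might look like this (the name, labels, image tag, and port below are illustrative, not taken from the question):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: typesense              # illustrative name
spec:
  replicas: 1
  strategy:
    type: Recreate             # kill old Pods before creating new ones
  selector:
    matchLabels:
      app: typesense
  template:
    metadata:
      labels:
        app: typesense
    spec:
      containers:
        - name: typesense
          image: typesense/typesense:0.25.2   # assumed image/tag
          ports:
            - containerPort: 8108             # Typesense's default API port
```

Note that `Recreate` implies a brief window with zero running Pods during every rollout, which is exactly what you want here (the file lock is released before the replacement starts) and acceptable for a dev environment.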
Answer 2
Score: 0
If you set your replicas to 0, this should generally achieve that:
```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 0
```
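Scaling down and back up can also be done with `kubectl scale` instead of editing the manifest; a sketch, assuming the Deployment above and a cluster your `kubectl` is pointed at:

```shell
# Scale to 0 so the old Pod (and its file lock) is gone first
kubectl scale deployment/nginx-deployment --replicas=0

# Wait until the old Pod has actually terminated
kubectl wait --for=delete pod -l app=nginx --timeout=60s

# Bring the new Pod up
kubectl scale deployment/nginx-deployment --replicas=1
```

The explicit `kubectl wait` step matters because scaling to 0 only begins termination; the replacement should not start until the old Pod is fully gone and its lock released.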