DevOps Pipeline does not update pods
Question
I have an AKS infrastructure where I deploy my projects using a DevOps pipeline.
Unfortunately, my current procedure is:
- Delete the pods manually
- Run the pipeline
Is there no better way?
My pipeline is:
trigger: none
resources:
- repo: self
pr:
- develop
variables:
  *
stages:
- stage: Build
  displayName: Build Stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: MavenAuthenticate@0
    - task: Maven@3
    - task: Docker@2
    - upload: manifests
      artifact: manifests
- stage: Deploy
  displayName: Deploy stage
  jobs:
  - deployment: Deploy
    displayName: Deploy
    workspace:
      clean: all
    pool: Default
    environment: $(clusterEnv)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Deploy to kubernetes cluster
            inputs:
              action: deploy
              namespace: $(k8sNamespace)
              kubernetesServiceConnection: $(kubernetesServiceConnection)
              manifests: |
                $(Pipeline.Workspace)/manifests/myManifest.yml
              containers: $(containerRegistry)/$(imageRepo):latest
Do I have something wrong in the deploy stage, or am I forgetting something?
Thanks.
Answer 1
Score: 1
What you're experiencing is the effect of using the latest tag.
Kubernetes doesn't see a change to the manifest; it doesn't know that the image behind the latest tag has changed.
If it doesn't think the deployment described by the manifest is different, the pods will not be recreated.
It's generally bad practice to use latest, so I'd suggest moving away from it... which in turn resolves your problem.
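For example, here is a minimal sketch of what that change could look like against the pipeline above. The Docker@2 inputs and the $(dockerRegistryServiceConnection) variable are illustrative assumptions, not taken from the original pipeline; the idea is simply to tag the image with a unique value such as $(Build.BuildId) and reference the same tag at deploy time.

# Build stage: tag the image with a unique, immutable value instead of latest.
# $(dockerRegistryServiceConnection) is an assumed service connection name.
- task: Docker@2
  inputs:
    command: buildAndPush
    repository: $(imageRepo)
    containerRegistry: $(dockerRegistryServiceConnection)
    tags: |
      $(Build.BuildId)

# Deploy stage: reference the same unique tag so the rendered manifest changes on every run.
- task: KubernetesManifest@0
  displayName: Deploy to kubernetes cluster
  inputs:
    action: deploy
    namespace: $(k8sNamespace)
    kubernetesServiceConnection: $(kubernetesServiceConnection)
    manifests: |
      $(Pipeline.Workspace)/manifests/myManifest.yml
    containers: $(containerRegistry)/$(imageRepo):$(Build.BuildId)

Because the image reference now changes on every run, the Deployment's pod template changes, so Kubernetes rolls out new pods automatically and the manual pod deletion step is no longer needed.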