DevOps pipeline does not update pods


Question

I have an AKS infrastructure where I deploy my projects using a DevOps pipeline.

Unfortunately, my current procedure is:

  1. Delete the pods manually
  2. Run the pipeline

Is there a better way?

My pipeline is:

```yaml
trigger: none

resources:
- repo: self

pr:
- develop

variables:
  *

stages:
- stage: Build
  displayName: Build Stage
  jobs:
  - job: Build
    displayName: Build
    pool:
      vmImage: $(vmImageName)
    steps:
    - task: MavenAuthenticate@0
    - task: Maven@3
    - task: Docker@2
    - upload: manifests
      artifact: manifests

- stage: Deploy
  displayName: Deploy stage
  jobs:
  - deployment: Deploy
    displayName: Deploy
    workspace:
      clean: all
    pool: Default
    environment: $(clusterEnv)
    strategy:
      runOnce:
        deploy:
          steps:
          - task: KubernetesManifest@0
            displayName: Deploy to kubernetes cluster
            inputs:
              action: deploy
              namespace: $(k8sNamespace)
              kubernetesServiceConnection: $(kubernetesServiceConnection)
              manifests: |
                $(Pipeline.Workspace)/manifests/myManifest.yml
              containers: $(containerRegistry)/$(imageRepo):latest
```

Is something wrong in my deploy stage, or am I forgetting something?

Thanks


Answer 1

Score: 1


What you're experiencing is the effect of using the latest tag.

Kubernetes doesn't see a change to the manifest; it has no way of knowing that the image behind the latest tag has changed.

If the deployment in the manifest doesn't look different, the pods will not be recreated.

Using latest is generally considered bad practice, so I'd suggest moving away from it... which in turn resolves your problem.
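A minimal sketch of that change against the question's pipeline: tag the image with the unique $(Build.BuildId) instead of latest, and reference the same tag in the deploy step. The `command` and `repository` values in the Docker@2 task below are illustrative assumptions; the question only shows the task name, not its inputs.

```yaml
# Build stage: tag the image with the unique build ID instead of "latest".
- task: Docker@2
  inputs:
    command: buildAndPush        # assumed; the question does not show this task's inputs
    repository: $(imageRepo)
    containerRegistry: $(containerRegistry)
    tags: |
      $(Build.BuildId)

# Deploy stage: point the KubernetesManifest task at the same unique tag.
# Because the pod template now changes on every run, Kubernetes performs a
# rolling update automatically -- no manual pod deletion needed.
- task: KubernetesManifest@0
  displayName: Deploy to kubernetes cluster
  inputs:
    action: deploy
    namespace: $(k8sNamespace)
    kubernetesServiceConnection: $(kubernetesServiceConnection)
    manifests: |
      $(Pipeline.Workspace)/manifests/myManifest.yml
    containers: $(containerRegistry)/$(imageRepo):$(Build.BuildId)
```

The KubernetesManifest task's `containers` input rewrites the image reference in the manifest, so the manifest file itself can keep a placeholder tag.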

huangapple
  • Published on 2023-05-22 23:22:31
  • Please keep this link when reposting: https://go.coder-hub.com/76307691.html