DevOps Pipeline does not update pods

Question

I have an AKS infrastructure where I deploy my projects using an Azure DevOps pipeline.

Unfortunately, my current procedure is:

  1. Delete pods manually
  2. Run pipeline

Is there no better way?

My pipeline is:

trigger: none

resources:
  - repo: self
pr:
  - develop

variables:
  *

stages:
  - stage: Build
    displayName: Build Stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - task: MavenAuthenticate@0
          - task: Maven@3
          - task: Docker@2
          - upload: manifests
            artifact: manifests
  - stage: Deploy
    displayName: Deploy stage
    jobs:
      - deployment: Deploy
        displayName: Deploy
        workspace:
          clean: all
        pool: Default
        environment: $(clusterEnv)
        strategy:
          runOnce:
            deploy:
              steps:
                - task: KubernetesManifest@0
                  displayName: Deploy to kubernetes cluster
                  inputs:
                    action: deploy
                    namespace: $(k8sNamespace)
                    kubernetesServiceConnection: $(kubernetesServiceConnection)
                    manifests: |
                      $(Pipeline.Workspace)/manifests/myManifest.yml
                    containers: $(containerRegistry)/$(imageRepo):latest

Do I have something wrong in the deploy stage, or am I forgetting something?

Thanks.

Answer 1

Score: 1

What you're experiencing is the effect of using the latest tag.

Kubernetes doesn't see a change to the manifest; it doesn't know that the image behind the latest tag has changed...

If it doesn't think the Deployment in the manifest has changed, the pods will not be recreated.
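
For illustration, here is a minimal sketch of the kind of Deployment that myManifest.yml might contain (the names and values below are assumptions, not taken from the question):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                 # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          # With the containers input from the pipeline above, this resolves to
          # $(containerRegistry)/$(imageRepo):latest on every run. The rendered
          # spec is identical each time, so Kubernetes never rolls the pods.
          image: myregistry.azurecr.io/my-app:latest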

It's generally considered bad practice to use the latest tag, so I'd suggest moving away from it... which in turn resolves your problem.
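
A minimal sketch of that change against the pipeline in the question, assuming the Docker@2 task builds and pushes the image (the dockerRegistryServiceConnection variable and the exact task inputs are assumptions):

# Build stage: tag the image with the unique build ID instead of latest
- task: Docker@2
  displayName: Build and push image
  inputs:
    command: buildAndPush
    repository: $(imageRepo)
    containerRegistry: $(dockerRegistryServiceConnection)   # assumed variable name
    tags: |
      $(Build.BuildId)

# Deploy stage: reference the same unique tag so the rendered manifest
# changes on every run and Kubernetes performs a rolling update
- task: KubernetesManifest@0
  displayName: Deploy to kubernetes cluster
  inputs:
    action: deploy
    namespace: $(k8sNamespace)
    kubernetesServiceConnection: $(kubernetesServiceConnection)
    manifests: |
      $(Pipeline.Workspace)/manifests/myManifest.yml
    containers: $(containerRegistry)/$(imageRepo):$(Build.BuildId)

Because the image reference now differs on every build, the Deployment spec changes, Kubernetes detects it, and the pods are replaced automatically; no manual deletion is needed.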
