How to attribute Kubernetes resource creation
Question
I'm experiencing a weird issue where a k8s resource I previously created, then subsequently deleted via kubectl,
keeps mysteriously coming back. It's a vanilla k8s cluster (no operators), and I should be the only user of the cluster.
$ kubectl get secret app-secret
NAME TYPE DATA AGE
app-secret Opaque 2 1d
$ kubectl delete secret app-secret
secret "app-secret" deleted
The next day:
$ kubectl get secret app-secret
NAME TYPE DATA AGE
app-secret Opaque 2 3h
I'd like to track down who/which entity brought it back.
I tried reviewing CronJobs,
because the resource is recreated at the same walltime every 24 hrs. No luck there.
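One way to narrow down the CronJob angle is to cross-check every schedule against the recreation hour. A minimal sketch (the CronJob names and schedules below are hypothetical; it deliberately treats ranges, steps, and lists as "could match" rather than parsing full cron syntax):

```python
def fires_at(schedule: str, hour: int, minute: int) -> bool:
    """Return True if a 5-field cron schedule could fire at hour:minute.

    Simplified: only plain numbers and '*' are checked in the minute/hour
    fields; ranges, steps, and lists are conservatively treated as a match.
    """
    fields = schedule.split()
    if len(fields) != 5:
        raise ValueError(f"not a 5-field cron schedule: {schedule!r}")
    minute_f, hour_f = fields[0], fields[1]

    def matches(field: str, value: int) -> bool:
        if field == "*" or not field.isdigit():
            return True  # wildcard or complex expression: assume possible
        return int(field) == value

    return matches(minute_f, minute) and matches(hour_f, hour)


# The secret reappeared around 04:18; flag any schedule in that hour.
schedules = {
    "backup-sync": "0 4 * * *",   # hypothetical names/schedules, e.g. from
    "report-gen": "30 12 * * *",  # kubectl get cronjobs -A
}
suspects = [name for name, s in schedules.items()
            if fires_at(s, 4, 18) or fires_at(s, 4, 0)]
print(suspects)
```

The schedule strings can be pulled with `kubectl get cronjobs -A -o jsonpath='{range .items[*]}{.metadata.name} {.spec.schedule}{"\n"}{end}'`.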
I also tried checking the YAML definition of the secret in hopes of finding some kind of 'creator' or 'author' field.
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: app-secret
  creationTimestamp: "2023-04-04T04:18:33Z"
  namespace: default
  resourceVersion: "39713521316"
  uid: 36eb0332-ea37-43d3-8034-4e47a8ebcd43
data:
  foo: YmFy
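There is no explicit "creator" field, but the API server does record metadata.managedFields, and each entry's manager field names the client that wrote those fields (e.g. kubectl-create, or a controller's name). Recent kubectl hides this by default; kubectl get secret app-secret -o yaml --show-managed-fields reveals it. A sketch extracting the managers from a JSON dump of the object (the managedFields content below is illustrative, not from the question):

```python
import json

# Hypothetical output of: kubectl get secret app-secret -o json
obj = json.loads("""
{
  "metadata": {
    "name": "app-secret",
    "managedFields": [
      {"manager": "argocd-controller", "operation": "Update", "time": "2023-04-04T04:18:33Z"}
    ]
  }
}
""")

# Each managedFields entry names the client that last wrote those fields.
managers = [(m["manager"], m.get("operation"), m.get("time"))
            for m in obj["metadata"].get("managedFields", [])]
for manager, op, when in managers:
    print(f"{manager} ({op}) at {when}")
```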
No luck there either.
I also tried kubectl get events, but the events don't appear to include anything about Secrets.
Here are the CRDs:
kubectl get crd
NAME CREATED AT
alertmanagers.monitoring.coreos.com 2020-11-27T16:26:59Z
apiservices.management.cattle.io 2021-10-06T19:38:28Z
apps.catalog.cattle.io 2020-11-27T16:27:04Z
authconfigs.management.cattle.io 2021-10-06T19:38:30Z
bgpconfigurations.crd.projectcalico.org 2020-11-27T16:05:00Z
bgppeers.crd.projectcalico.org 2020-11-27T16:05:00Z
blockaffinities.crd.projectcalico.org 2020-11-27T16:05:00Z
caliconodestatuses.crd.projectcalico.org 2022-10-04T02:32:09Z
clusterflows.logging.banzaicloud.io 2022-01-14T14:43:04Z
clusterinformations.crd.projectcalico.org 2020-11-27T16:05:01Z
clusteroutputs.logging.banzaicloud.io 2022-01-14T14:43:05Z
clusterregistrationtokens.management.cattle.io 2021-10-06T19:38:28Z
clusterrepos.catalog.cattle.io 2020-11-27T16:27:04Z
clusters.management.cattle.io 2020-11-27T16:27:03Z
features.management.cattle.io 2020-11-27T16:27:04Z
felixconfigurations.crd.projectcalico.org 2020-11-27T16:04:49Z
flows.logging.banzaicloud.io 2022-01-14T14:43:04Z
globalnetworkpolicies.crd.projectcalico.org 2020-11-27T16:05:01Z
globalnetworksets.crd.projectcalico.org 2020-11-27T16:05:01Z
groupmembers.management.cattle.io 2021-10-06T19:38:31Z
groups.management.cattle.io 2021-10-06T19:38:31Z
hostendpoints.crd.projectcalico.org 2020-11-27T16:05:01Z
ipamblocks.crd.projectcalico.org 2020-11-27T16:04:59Z
ipamconfigs.crd.projectcalico.org 2020-11-27T16:05:00Z
ipamhandles.crd.projectcalico.org 2020-11-27T16:05:00Z
ippools.crd.projectcalico.org 2020-11-27T16:05:00Z
ipreservations.crd.projectcalico.org 2022-10-04T02:32:16Z
kubecontrollersconfigurations.crd.projectcalico.org 2022-10-04T02:32:17Z
loggings.logging.banzaicloud.io 2022-01-14T14:43:04Z
navlinks.ui.cattle.io 2021-10-06T19:38:28Z
networkpolicies.crd.projectcalico.org 2020-11-27T16:05:02Z
networksets.crd.projectcalico.org 2020-11-27T16:05:02Z
nodepools.kube.cloud.ovh.com 2020-11-27T16:02:54Z
operations.catalog.cattle.io 2020-11-27T16:27:04Z
outputs.logging.banzaicloud.io 2022-01-14T14:43:05Z
preferences.management.cattle.io 2020-11-27T16:27:04Z
prometheuses.monitoring.coreos.com 2020-11-27T16:26:58Z
prometheusrules.monitoring.coreos.com 2020-11-27T16:26:59Z
servicemonitors.monitoring.coreos.com 2020-11-27T16:27:00Z
settings.management.cattle.io 2020-11-27T16:27:04Z
tokens.management.cattle.io 2021-10-06T19:38:31Z
userattributes.management.cattle.io 2021-10-06T19:38:31Z
users.management.cattle.io 2021-10-06T19:38:31Z
volumesnapshotclasses.snapshot.storage.k8s.io 2020-11-27T16:05:48Z
volumesnapshotcontents.snapshot.storage.k8s.io 2020-11-27T16:05:48Z
volumesnapshots.snapshot.storage.k8s.io 2020-11-27T16:05:48Z
I just deleted it again, but in 24h it's likely going to be back.
What are ways I might track this down?
Answer 1
Score: 1
Check whether you are using CD tools such as ArgoCD; these tools can recreate resources to keep applications in their desired state.
Also check the installed CRDs: the controllers behind some CRDs may reconcile certain resources back into a desired state, recreating them after deletion.
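Such tools usually tag the objects they manage, and the CRD list above (cattle.io, monitoring.coreos.com, banzaicloud.io) shows several controllers are in fact installed. A sketch checking an object's metadata for common ownership markers (the marker keys come from each tool's documentation and should be verified against your versions; the sample metadata is hypothetical):

```python
# Markers commonly applied by CD/management tools to objects they own.
# (Keys are taken from each tool's docs; verify against your versions.)
OWNERSHIP_MARKERS = {
    "annotations": [
        "meta.helm.sh/release-name",          # Helm
        "argocd.argoproj.io/tracking-id",     # Argo CD (annotation tracking)
    ],
    "labels": [
        "app.kubernetes.io/managed-by",       # generic: Helm, Argo CD, ...
        "kustomize.toolkit.fluxcd.io/name",   # Flux
        "objectset.rio.cattle.io/hash",       # Rancher's apply mechanism
    ],
}

def ownership_hints(metadata: dict) -> list[str]:
    """Return 'kind key=value' strings for any known ownership markers."""
    hints = []
    for kind in ("annotations", "labels"):
        present = metadata.get(kind) or {}
        for key in OWNERSHIP_MARKERS[kind]:
            if key in present:
                hints.append(f"{kind[:-1]} {key}={present[key]}")
    return hints

# Hypothetical metadata from: kubectl get secret app-secret -o json
meta = {"labels": {"app.kubernetes.io/managed-by": "Helm"},
        "annotations": {"meta.helm.sh/release-name": "app"}}
print(ownership_hints(meta))
```

A recreated object's metadata.ownerReferences is also worth checking, since controllers that own an object usually set it.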