Is there any way to configure a persistent path/location to collect the core dump of a container?

Question

Is there any official support to configure a downloadable core dump location for a specific deployment/container in GKE?

Imagine we could have something like core-path:

    spec:
      containers:
        - name: nebula
        - core-path: gs://some-bucket/cores/

So that every time the container crashes, I can grab its core file from the GCS location.
This sounds like a very useful feature; if it is not available in GKE/GCP, what alternatives can I use to collect the core dumps?

Thanks for your answers/pointers.

Answer 1

Score: 1


GKE provides support to configure a downloadable core dump location for a specific deployment/container. If the Pod/Container/Node restarts or crashes, we will lose all the data that was generated in the core-dump folder.

So, GCP introduced an agent called the Core Dump Collector Agent on each Node. This core dump collector agent exists as a Daemon Pod / sidecar container that collects the data from the location where the core dumps are created and uploads it to external storage such as Google Cloud Storage (GCS), local file storage, etc.
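On Linux nodes, the location a crashing process writes its core file to is governed by the kernel's `core_pattern` setting, which is node-wide; an agent like the one described needs that pattern pointed at the directory it collects from. A minimal sketch of inspecting it (the `/cores` path in the comment is an assumed example, not a GKE default):

```shell
# The node-wide core dump destination is controlled by the kernel:
cat /proc/sys/kernel/core_pattern

# To redirect core files into the directory the collector agent mounts,
# a privileged init container or node startup script could run something
# like the following (requires root; the /cores path is an assumption):
#   echo '/cores/core.%e.%p.%t' > /proc/sys/kernel/core_pattern
# %e = executable name, %p = PID, %t = timestamp
```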

Each application running inside the Pod writes its core dump to a location in its file structure. This location is mounted as a Persistent Volume within the Kubernetes cluster. The core dump agent watches the mounted volume location, so as soon as files are created in the Persistent Volume, they are uploaded to Google Storage outside the cluster; this gives us a centralized path from which we can retrieve all core dumps across multiple clusters. We could also mount an empty directory rather than a Persistent Volume; a Persistent Volume is chosen so that we do not lose any core dumps under unforeseen circumstances.
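The watch-and-upload loop described above can be sketched roughly as follows. This is a simplified illustration, not the actual agent: it polls the mounted volume for new core files and hands each one to an upload function. Here the upload is stubbed as a local copy so the sketch is self-contained; a real agent would push the file to GCS with the `google-cloud-storage` client instead.

```python
import shutil
import time
from pathlib import Path


def upload(core_file: Path, dest_dir: Path) -> None:
    """Stand-in for the real upload. A real agent would push the file to
    GCS (e.g. via the google-cloud-storage client); here we copy it to a
    local directory representing the bucket."""
    dest_dir.mkdir(parents=True, exist_ok=True)
    shutil.copy2(core_file, dest_dir / core_file.name)


def scan_once(watch_dir: Path, dest_dir: Path, seen: set) -> int:
    """One pass over the mounted volume: upload any core file not yet seen.
    Returns the number of files uploaded in this pass."""
    uploaded = 0
    for core in watch_dir.glob("core.*"):
        if core.name not in seen:
            upload(core, dest_dir)
            seen.add(core.name)
            uploaded += 1
    return uploaded


def watch(watch_dir: Path, dest_dir: Path, interval: float = 5.0) -> None:
    """Poll forever, the way a sidecar/DaemonSet agent might."""
    seen: set = set()
    while True:
        scan_once(watch_dir, dest_dir, seen)
        time.sleep(interval)
```

A production agent would also handle partially written core files (e.g. wait for the file size to stabilize before uploading) and clean up uploaded files to keep the volume from filling.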

For more information, you can refer to the article written by Jayant Chaudhury.

huangapple
  • Posted on July 27, 2023 at 15:44:02
  • Please retain this link when reposting: https://go.coder-hub.com/76777525.html