Copy PVC files locally without a dedicated pod

Question

We have a PVC that is written to by many k8s CronJobs. We'd like to periodically copy this data locally. Ordinarily one would use kubectl cp for such tasks, but since there is no actively running pod with the PVC mounted, this is not possible.

We've been using a modified version of this gist script, kubectl-run-with-pvc.sh, to create a temporary pod (running sleep 300) and then kubectl cp from that temporary pod to get the PVC data. This "works" but feels kludgey.
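Concretely, the workflow boils down to something like the following sketch (the pod and PVC names here are illustrative, not taken from the gist):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: pvc-copy-helper        # illustrative name
spec:
  restartPolicy: Never
  containers:
  - name: sleeper
    image: busybox
    command: ["sleep", "300"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: my-pvc        # illustrative claim name
EOF

kubectl wait --for=condition=Ready pod/pvc-copy-helper --timeout=60s
kubectl cp pvc-copy-helper:/data ./pvc-data
kubectl delete pod pvc-copy-helper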

Is there a more elegant way to achieve this?


Answer 1

Score: 2


"我可以提议使用NFS而不是PVC吗?

If you do not have an NFS server, you can run one inside the k8s cluster using this image @ https://hub.docker.com/r/itsthenetwork/nfs-server-alpine. The in-cluster NFS server itself still uses a PVC for its storage, but your pods mount the share over NFS instead.

In other words, go from pod --> PVC to pod --> NFS --> PVC.

Here is the script I often use to create a dedicated in-cluster NFS server (just modify the variables at the top of the script accordingly):

export NFS_NAME="nfs-share"
export NFS_SIZE="10Gi"
export NFS_SERVER_IMAGE="itsthenetwork/nfs-server-alpine:latest"
export STORAGE_CLASS="thin-disk"

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  ports:
  - name: tcp-2049
    port: 2049
    protocol: TCP
  - name: udp-111
    port: 111
    protocol: UDP
  selector:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
  name: ${NFS_NAME}
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: ${NFS_SIZE}
  storageClassName: ${STORAGE_CLASS}
  volumeMode: Filesystem
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${NFS_NAME}
  labels:
    app.kubernetes.io/name: nfs-server
    app.kubernetes.io/instance: ${NFS_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: nfs-server
      app.kubernetes.io/instance: ${NFS_NAME}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: nfs-server
        app.kubernetes.io/instance: ${NFS_NAME}
    spec:
      containers:
      - name: nfs-server
        image: ${NFS_SERVER_IMAGE}
        ports:
        - containerPort: 2049
          name: tcp
        - containerPort: 111
          name: udp
        securityContext:
          privileged: true
        env:
        - name: SHARED_DIRECTORY
          value: /nfsshare
        volumeMounts:
        - name: pvc
          mountPath: /nfsshare
      volumes:
      - name: pvc
        persistentVolumeClaim:
          claimName: ${NFS_NAME}
EOF

To mount the NFS share inside your pod, first get its service IP:

export NFS_NAME="nfs-share"
export NFS_IP=$(kubectl get --template={{.spec.clusterIP}} service/$NFS_NAME)

Then:

containers:
  - name: apache
    image: apache
    volumeMounts:
      - mountPath: /var/www/html/
        name: nfs-vol
        subPath: html
volumes:
  - name: nfs-vol
    nfs: 
      server: $NFS_IP
      path: /
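
Note that $NFS_IP in the snippet above is a shell variable, so the manifest has to be substituted before it is applied. As a minimal sketch, a standalone client pod could be applied via a heredoc, the same way as the server script (the pod name is illustrative):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nfs-client             # illustrative name
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "86400"]
    volumeMounts:
    - mountPath: /data
      name: nfs-vol
  volumes:
  - name: nfs-vol
    nfs:
      server: ${NFS_IP}        # resolved by the shell before apply
      path: /
EOF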

This way, you not only have a permanently running pod (the NFS server pod) to kubectl cp from, you also get to mount the same NFS volume into multiple pods concurrently, since NFS does not have the single-mount restriction that most PVC drivers have.
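
For the original use case (pulling the data down locally), you can then kubectl cp straight out of the NFS server pod, which always has the share mounted at /nfsshare. A sketch using the labels from the script above:

NFS_POD=$(kubectl get pod -l app.kubernetes.io/instance=${NFS_NAME} -o jsonpath='{.items[0].metadata.name}')
kubectl cp "${NFS_POD}":/nfsshare ./nfs-data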

N.B.: I have been using this in-cluster NFS server technique for almost 5 years, supporting production-grade traffic volumes, with no issues.


Answer 2

Score: 1

You can use Restic to back up your filesystem and restore it wherever you need to.

To create a backup, you can write Jobs like the ones below. First, create the init Job:

apiVersion: batch/v1
kind: Job
metadata:
  name: restic-init-job
spec:
  template:
    spec:
      containers:
      - image: restic/restic:0.14.0
        name: restic
        env:
        - name: RESTIC_PASSWORD
          value: "123"
        command: ["restic"]
        args: ["init", "--repo", "/tmp/backup"]
        volumeMounts:
        - name: restic-backup
          mountPath: /tmp/backup/
      restartPolicy: OnFailure

      volumes:
       - name: restic-backup
         persistentVolumeClaim:
           claimName: restic-claim

Then the backup:

apiVersion: batch/v1
kind: Job
metadata:
  name: restic-backup
spec:
  template:
    spec:
      containers:
      - image: restic/restic:0.14.0
        name: restic
        env:
        - name: RESTIC_PASSWORD
          value: "123"
        command: ["restic"]
        args: ["--repo", "/tmp/backup", "backup", "/bitnami/postgresql"]
        volumeMounts:
        - name: html
          mountPath: /bitnami/postgresql
          readOnly: true
        - name: restic-backup
          mountPath: /tmp/backup/
      restartPolicy: OnFailure

      volumes:
       - name: html
         persistentVolumeClaim:
           claimName: data-postgres-0
       - name: restic-backup
         persistentVolumeClaim:
           claimName: restic-claim
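
Since the original goal is a periodic copy, the backup Job above can also be wrapped in a CronJob so it runs on a schedule. A sketch (the name and schedule are just examples):

apiVersion: batch/v1
kind: CronJob
metadata:
  name: restic-backup-cron     # illustrative name
spec:
  schedule: "0 2 * * *"        # example: daily at 02:00
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: restic/restic:0.14.0
            name: restic
            env:
            - name: RESTIC_PASSWORD
              value: "123"
            command: ["restic"]
            args: ["--repo", "/tmp/backup", "backup", "/bitnami/postgresql"]
            volumeMounts:
            - name: html
              mountPath: /bitnami/postgresql
              readOnly: true
            - name: restic-backup
              mountPath: /tmp/backup/
          restartPolicy: OnFailure
          volumes:
          - name: html
            persistentVolumeClaim:
              claimName: data-postgres-0
          - name: restic-backup
            persistentVolumeClaim:
              claimName: restic-claim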

And to restore:

apiVersion: batch/v1
kind: Job
metadata:
  name: restic-restore
spec:
  template:
    spec:
      containers:
      - image: restic/restic:0.14.0
        name: restic
        env:
        - name: RESTIC_PASSWORD
          value: "123"
        command: ["restic"]
        args: ["--repo", "/tmp/backup", "restore", "latest", "--target", "/"]
        volumeMounts:
        - name: html
          mountPath: /bitnami/postgresql
        - name: restic-backup
          mountPath: /tmp/backup/
      restartPolicy: OnFailure

      volumes:
       - name: html
         persistentVolumeClaim:
           claimName: data-postgres-0
       - name: restic-backup
         persistentVolumeClaim:
           claimName: restic-claim
