Why is Velero and Minio backup restore not working for a PGO cluster?
Question

I have set up Minio and Velero backups for my K8s cluster. Everything works fine: I can take backups and I can see them in Minio. I have a PGO operator cluster, hippo, running with a LoadBalancer service. When I restore a backup via Velero, everything seems okay: it creates the namespace and all the deployments, and the pods are in the Running state.

However, I am not able to connect to my database via pgAdmin. When I delete the pod, it is not recreated; instead it shows an error about an unbound PVC.

This is the output:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
Warning FailedScheduling 16m default-scheduler 0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod.
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get PV
error: the server doesn't have a resource type "PV"
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pv
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                              STORAGECLASS       REASON   AGE
pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-4bhf-pgdata   openebs-hostpath            16m
pvc-2dd12937-a70e-40b4-b1ad-be1c9f7b39ec   5G         RWO            Delete           Bound    default/local-hostpath-pvc                         openebs-hostpath            6d9h
pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-xvhq-pgdata   openebs-hostpath            16m
pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-p4ct-pgdata      openebs-hostpath            7m32s
pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-s6fs-pgdata      openebs-hostpath            7m33s
pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            Delete           Bound    postgres-operator/hippo-s3-instance2-c4rt-pgdata   openebs-hostpath            16m
pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            Delete           Bound    postgres-operator/hippo-instance2-29gh-pgdata      openebs-hostpath            7m32s
pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            Delete           Bound    postgres-operator/hippo-repo2                      openebs-hostpath            7m30s
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath   7m51s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath   7m51s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath   7m51s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath   7m51s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath   16m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath   16m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath   16m
hippo-s3-repo1                   Pending                                                                        pgo                16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m43s
hippo-instance2-29gh-0         4/4     Running     0          8m5s
hippo-instance2-p4ct-0         4/4     Running     0          8m5s
hippo-instance2-s6fs-0         4/4     Running     0          8m5s
hippo-repo-host-0              2/2     Running     0          8m19s
hippo-s3-instance2-c4rt-0      3/4     Running     0          16m
hippo-s3-repo-host-0           0/2     Pending     0          16m
pgo-7c867985c-kph6l            1/1     Running     0          16m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          16m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl delete pods hippo-s3-repo-host-0 -n postgres-operator
pod "hippo-s3-repo-host-0" deleted
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pods -n postgres-operator
NAME                           READY   STATUS      RESTARTS   AGE
hippo-backup-txk9-rrk4m        0/1     Completed   0          7m57s
hippo-instance2-29gh-0         4/4     Running     0          8m19s
hippo-instance2-p4ct-0         4/4     Running     0          8m19s
hippo-instance2-s6fs-0         4/4     Running     0          8m19s
hippo-repo-host-0              2/2     Running     0          8m19s
hippo-s3-instance2-c4rt-0      3/4     Running     0          17m
hippo-s3-repo-host-0           0/2     Pending     0          2s
pgo-7c867985c-kph6l            1/1     Running     0          17m
pgo-upgrade-69b5dfdc45-6qrs8   1/1     Running     0          17m
master@masterk8s-virtual-machine:~/postgres-operator-examples-main$ kubectl get pvc -n postgres-operator
NAME                             STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS       AGE
hippo-instance2-29gh-pgdata      Bound     pvc-d4629dba-b172-47ea-ab01-12a9039be571   5Gi        RWO            openebs-hostpath   8m45s
hippo-instance2-p4ct-pgdata      Bound     pvc-531c9ac7-938c-46b1-b4fa-3a7599f40038   5Gi        RWO            openebs-hostpath   8m45s
hippo-instance2-s6fs-pgdata      Bound     pvc-968d9794-e4ba-479c-9138-8fbd85422920   5Gi        RWO            openebs-hostpath   8m45s
hippo-repo2                      Bound     pvc-e79d68c3-4e2f-4314-b83f-f96c306a9b38   5Gi        RWO            openebs-hostpath   8m45s
hippo-s3-instance2-4bhf-pgdata   Bound     pvc-1ca9e092-4e84-4ca4-88e3-0050890ef101   5Gi        RWO            openebs-hostpath   17m
hippo-s3-instance2-c4rt-pgdata   Bound     pvc-987c1bd1-bf41-4180-91de-15bb5ead38ad   5Gi        RWO            openebs-hostpath   17m
hippo-s3-instance2-xvhq-pgdata   Bound     pvc-30af7f3b-7ce5-4e2a-8c68-5c701881293b   5Gi        RWO            openebs-hostpath   17m
hippo-s3-repo1                   Pending                                                                        pgo                17m
**What Do I Want?**

I want Velero to restore the full backup so that I can access my databases the way I could before the restore. It seems like Velero is not able to perform a full backup.

Any suggestion will be appreciated.
# Answer 1
**Score**: 1
Velero is a backup and restore solution for Kubernetes clusters and their associated persistent volumes. While Velero does not currently support full backup and restore of databases (see these [limitations][1]), it does support snapshotting and restoring persistent volumes. This means that while you may not be able to directly restore a full database, you can restore the persistent volumes associated with the database and then use the appropriate tools to restore the data from the snapshots. Additionally, Velero's plugin architecture lets you extend its capabilities with custom plugins that add custom backup and restore functionality.
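A minimal sketch of that idea, using Velero's file-system backup to copy the PV contents along with the namespace (an illustration only: the backup name `hippo-fs-backup` is a placeholder, and the flag assumes Velero 1.10 or newer; older releases used `--default-volumes-to-restic` instead):

```bash
# Back up the PGO namespace and copy the volume data itself with
# Velero's file-system backup (useful when the storage class has no
# snapshot support, e.g. hostPath-style volumes).
velero backup create hippo-fs-backup \
  --include-namespaces postgres-operator \
  --default-volumes-to-fs-backup

# Confirm the pod volume backups completed before relying on a restore.
velero backup describe hippo-fs-backup --details

# Restore the backup into the target cluster.
velero restore create --from-backup hippo-fs-backup
```

Per the limitations linked above, a file-system backup is still not application-consistent, so a database restored this way may need to be verified or recovered with its own backup tooling.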
For more information on backup and restore, refer to this DigitalOcean blog post by [Hanif Jetha and Jamon Camisso][2].
[1]: https://velero.io/docs/main/file-system-backup/#limitations
[2]: https://www.digitalocean.com/community/tutorials/how-to-back-up-and-restore-a-kubernetes-cluster-on-digitalocean-using-velero
# Answer 2
**Score**: 0
Based on the error you have shared, your setup is missing the PV or PVC.

Velero can back up PVCs and PVs (generally via snapshots when using the AWS or GCP plugins), and when you restore, it creates the PVCs and PVs for you as well.

I have migrated an Elasticsearch database with Velero, along with its PVCs, and it worked well in my case. However, are you using the same cloud provider and **storageclass** in both clusters? Why is the **PVC** for **hippo-s3-repo** pending? Did you find the reason for that?
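As a hedged starting point for that check (plain kubectl commands; the claim name and namespace are taken from the output in the question), you could inspect why the claim never binds:

```bash
# Show the events and the requested StorageClass for the stuck claim.
kubectl describe pvc hippo-s3-repo1 -n postgres-operator

# List the storage classes that actually exist in the restored cluster;
# the pending claim appears to request "pgo", while the bound claims use
# "openebs-hostpath".
kubectl get storageclass
```

If the storage class recorded in the backup does not exist in the cluster you restore into, Velero's documented `velero.io/change-storage-class` restore ConfigMap can remap it to one that does.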
Here is my article; note that I was using the plugin and a bucket as storage: https://faun.pub/clone-migrate-data-between-kubernetes-clusters-with-velero-e298196ec3d8