Kubernetes NFS storage class – where is the persistent data located?

Question

I'm testing PostgreSQL in Kubernetes and installed Kubegres following the [official instructions][1], with a single exception: I defined my own NFS storage class and two Persistent Volumes.

Everything works as expected:

  • I have two pods (primary and replica); if I create a table on one pod, it is synchronized to the second pod.
  • If I restart all the Kubernetes cluster nodes (the control plane and all workers), I can still find my data, so it is persistent.

The problem is that I can't find the data where it is supposed to be. It may sound like a silly question, but the data is not in 192.168.88.3/caidoNFS, where the storage class points.
I mounted 192.168.88.3/caidoNFS on another machine and the folder is empty, so either something is wrong or I'm missing something essential.

The storage class is:

  apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    name: caido-nfs
  provisioner: caido.ro/caido-nfs
  reclaimPolicy: Retain
  parameters:
    server: 192.168.88.3
    path: /caidoNFS
    readOnly: "false"

I have two Persistent Volumes of exactly the size requested by the Persistent Volume Claims (500Mi):

  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: caido-pv1
    labels:
      type: nfs
  spec:
    storageClassName: caido-nfs
    capacity:
      storage: 500Mi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: "/mnt/data1"
  ---
  apiVersion: v1
  kind: PersistentVolume
  metadata:
    name: caido-pv2
    labels:
      type: nfs
  spec:
    storageClassName: caido-nfs
    capacity:
      storage: 500Mi
    accessModes:
      - ReadWriteOnce
    hostPath:
      path: "/mnt/data2"
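A note on the manifests above (my observation, not part of the original question): the `parameters` in the StorageClass (`server`, `path`, `readOnly`) are only read by a dynamic provisioner registered under the name `caido.ro/caido-nfs`; with statically created PVs they are ignored, and each PV mounts whatever source it declares. Both PVs here declare `hostPath`, which is a directory on the node's local disk, not NFS. A sketch of the first PV rewritten with an actual NFS volume source, reusing the server and path from the question (untested):

```yaml
# Sketch only: caido-pv1 with an in-tree nfs volume source, so the data
# would actually live on 192.168.88.3:/caidoNFS rather than on node disk.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: caido-pv1
  labels:
    type: nfs
spec:
  storageClassName: caido-nfs
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteOnce
  nfs:
    server: 192.168.88.3
    path: /caidoNFS   # each PV could use its own subdirectory of the export
```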

If I run the following command:

  kubectl get pod,statefulset,svc,configmap,pv,pvc -o wide

This is the output:

  NAME                 READY   STATUS    RESTARTS   AGE    IP              NODE                NOMINATED NODE   READINESS GATES
  pod/mypostgres-1-0   1/1     Running   1          105m   192.168.80.3    kubernetesworker1   <none>           <none>
  pod/mypostgres-2-0   1/1     Running   1          79m    192.168.80.66   kubernetesworker2   <none>           <none>

  NAME                            READY   AGE    CONTAINERS     IMAGES
  statefulset.apps/mypostgres-1   1/1     105m   mypostgres-1   postgres:14.1
  statefulset.apps/mypostgres-2   1/1     79m    mypostgres-2   postgres:14.1

  NAME                         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)    AGE    SELECTOR
  service/kubernetes           ClusterIP   10.96.0.1    <none>        443/TCP    5d5h   <none>
  service/mypostgres           ClusterIP   None         <none>        5432/TCP   79m    app=mypostgres,replicationRole=primary
  service/mypostgres-replica   ClusterIP   None         <none>        5432/TCP   73m    app=mypostgres,replicationRole=replica

  NAME                             DATA   AGE
  configmap/base-kubegres-config   7      105m
  configmap/kube-root-ca.crt       1      5d5h

  NAME                         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                                STORAGECLASS   REASON   AGE   VOLUMEMODE
  persistentvolume/caido-pv1   500Mi      RWO            Retain           Bound    default/postgres-db-mypostgres-1-0   caido-nfs               79m   Filesystem
  persistentvolume/caido-pv2   500Mi      RWO            Retain           Bound    default/postgres-db-mypostgres-2-0   caido-nfs               74m   Filesystem

  NAME                                               STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
  persistentvolumeclaim/postgres-db-mypostgres-1-0   Bound    caido-pv1   500Mi      RWO            caido-nfs      105m   Filesystem
  persistentvolumeclaim/postgres-db-mypostgres-2-0   Bound    caido-pv2   500Mi      RWO            caido-nfs      79m    Filesystem

The description of my primary PostgreSQL pod from the command **kubectl describe pod mypostgres-1** is:

  Name:             mypostgres-1-0
  Namespace:        default
  Priority:         0
  Service Account:  default
  Node:             kubernetesworker1/192.168.88.71
  Start Time:       Thu, 22 Jun 2023 07:07:17 +0000
  Labels:           app=mypostgres
                    controller-revision-hash=mypostgres-1-6f46f6f669
                    index=1
                    replicationRole=primary
                    statefulset.kubernetes.io/pod-name=mypostgres-1-0
  Annotations:      cni.projectcalico.org/containerID: 910d046ac8b269cd67a48d8334c36a6d8849ba34ca2161403101ba507856e339
                    cni.projectcalico.org/podIP: 192.168.80.12/32
                    cni.projectcalico.org/podIPs: 192.168.80.12/32
  Status:           Running
  IP:               192.168.80.12
  IPs:
    IP:  192.168.80.12
  Controlled By:  StatefulSet/mypostgres-1
  Containers:
    mypostgres-1:
      Container ID:  cri-o://d3196998458acec1797f12279c00e8e58366764e57e0ad3b58f5617a85c7d421
      Image:         postgres:14.1
      Image ID:      docker.io/library/postgres@sha256:043c256b5dc621860539d8036d906eaaef1bdfa69a0344b4509b483205f14e63
      Port:          5432/TCP
      Host Port:     0/TCP
      Args:
        -c
        config_file=/etc/postgres.conf
        -c
        hba_file=/etc/pg_hba.conf
      State:          Running
        Started:      Thu, 22 Jun 2023 07:07:17 +0000
      Ready:          True
      Restart Count:  0
      Liveness:       exec [sh -c exec pg_isready -U postgres -h $POD_IP] delay=60s timeout=15s period=20s #success=1 #failure=10
      Readiness:      exec [sh -c exec pg_isready -U postgres -h $POD_IP] delay=5s timeout=3s period=10s #success=1 #failure=3
      Environment:
        POD_IP:                         (v1:status.podIP)
        PGDATA:                         /var/lib/postgresql/data/pgdata
        POSTGRES_PASSWORD:              <set to the key 'superUserPassword' in secret 'mypostgres-secret'>  Optional: false
        POSTGRES_REPLICATION_PASSWORD:  <set to the key 'replicationUserPassword' in secret 'mypostgres-secret'>  Optional: false
      Mounts:
        /docker-entrypoint-initdb.d/primary_create_replication_role.sh from base-config (rw,path="primary_create_replication_role.sh")
        /docker-entrypoint-initdb.d/primary_init_script.sh from base-config (rw,path="primary_init_script.sh")
        /etc/pg_hba.conf from base-config (rw,path="pg_hba.conf")
        /etc/postgres.conf from base-config (rw,path="postgres.conf")
        /var/lib/postgresql/data from postgres-db (rw)
        /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-tlgnh (ro)
  Conditions:
    Type              Status
    Initialized       True
    Ready             True
    ContainersReady   True
    PodScheduled      True
  Volumes:
    postgres-db:
      Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
      ClaimName:  postgres-db-mypostgres-1-0
      ReadOnly:   false
    base-config:
      Type:      ConfigMap (a volume populated by a ConfigMap)
      Name:      base-kubegres-config
      Optional:  false
    kube-api-access-tlgnh:
      Type:                    Projected (a volume that contains injected data from multiple sources)
      TokenExpirationSeconds:  3607
      ConfigMapName:           kube-root-ca.crt
      ConfigMapOptional:       <nil>
      DownwardAPI:             true
  QoS Class:       BestEffort
  Node-Selectors:  <none>
  Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
  Events:
    Type     Reason            Age   From               Message
    ----     ------            ---   ----               -------
    Warning  FailedScheduling  56s   default-scheduler  0/3 nodes are available: pod has unbound immediate PersistentVolumeClaims. preemption: 0/3 nodes are available: 3 No preemption victims found for incoming pod..
    Normal   Scheduled         54s   default-scheduler  Successfully assigned default/mypostgres-1-0 to kubernetesworker1
    Normal   Pulled            54s   kubelet            Container image "postgres:14.1" already present on machine
    Normal   Created           54s   kubelet            Created container mypostgres-1
    Normal   Started           54s   kubelet            Started container mypostgres-1

The message "pod has unbound immediate PersistentVolumeClaims" appears even though I have a Persistent Volume created and bound. Maybe this error appears because of some initial timeout? This is the output of **kubectl describe pvc postgres-db-mypostgres-1-0**:

  Name:          postgres-db-mypostgres-1-0
  Namespace:     default
  StorageClass:  caido-nfs
  Status:        Bound
  Volume:        caido-pv1
  Labels:        app=mypostgres
                 index=1
  Annotations:   pv.kubernetes.io/bind-completed: yes
                 pv.kubernetes.io/bound-by-controller: yes
  Finalizers:    [kubernetes.io/pvc-protection]
  Capacity:      500Mi
  Access Modes:  RWO
  VolumeMode:    Filesystem
  Used By:       mypostgres-1-0
  Events:        <none>

To summarize, there are two questions:

  • Where is the persistent data located? I accessed \\192.168.88.3\caidoNFS from a Windows machine and mounted it via a Linux fstab entry, but the folder is empty:

    //192.168.88.3/caidoNFS /mnt/nfs cifs username=admin,dom=mydomain,password=...# 0 0

  • Why does the message "pod has unbound immediate PersistentVolumeClaims" appear if the PVC is bound?

[1]: https://www.kubegres.io/doc/getting-started.html
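A side note on the mount check in the first question: the fstab line above mounts the share over CIFS/SMB, which talks to a Windows/Samba share on that host and is not necessarily the same tree the NFS server exports. To look at the NFS export itself, the fstab entry would use the nfs filesystem type — a sketch, assuming the export and mount point from the question:

```
# Sketch: mount the NFS export rather than a CIFS share on the same host.
192.168.88.3:/caidoNFS  /mnt/nfs  nfs  defaults  0  0
```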
# Answer 1

**Score**: 1

I found my data: it was in /mnt/data1 and /mnt/data2 on every worker node. I guess the NFS failed for some unknown reason and the system created local storage on every worker node instead.

I probably didn't configure the NFS as I should have; I'll read more about this.
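This finding is consistent with the PV manifests in the question: despite the `type: nfs` label, their volume source is `hostPath`, and a hostPath volume is a directory on whichever node the pod happens to run on, so each worker ends up with its own independent copy under /mnt/data1 or /mnt/data2. The relevant fragment:

```yaml
# From the PV manifests above: hostPath is node-local storage, which is
# why the data appeared on the workers instead of on the NFS server.
spec:
  hostPath:
    path: "/mnt/data1"
```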

huangapple
  • Published 2023-06-22 15:29:44
  • Please keep this link when reposting: https://go.coder-hub.com/76529509.html