Initializing meta-data on drbd resource wipes existing data?



I upgraded CentOS 6.4 to 6.10 on one node, and after that the installed DRBD packages (listed below) stopped working:

drbd-km-2.6.32_358.11.1.el6.x86_64-8.4.3-2.el6.x86_64
drbd-km-2.6.32_358.6.2.el6.x86_64-8.4.3-2.el6.x86_64
drbd-utils-8.4.3-2.el6.x86_64

I wasn't able to find any update for these, so I removed them and installed the DRBD packages below:

kmod-drbd83-8.3.16-3.el6.elrepo.x86_64
drbd83-utils-8.3.16-1.el6.elrepo.x86_64
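
For context, here is a minimal sketch of the kind of package swap described above on a CentOS 6 box with the ELRepo repository enabled; the exact commands are an assumption, not something recorded in the original post:

```
# Remove the 8.4.x userland and kernel-module packages that no longer load
yum remove drbd-utils 'drbd-km-*'

# Install the 8.3.x kernel module and utilities from ELRepo
yum install kmod-drbd83 drbd83-utils
```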

After that, starting the drbd service gives me the error below:


[root@node1 etc]# /etc/init.d/drbd start
Starting DRBD resources: [
data
no suitable meta data found :(
Command '/sbin/drbdmeta 0 v08 /dev/local/data internal check-resize' terminated with exit code 255
drbdadm check-resize data: exited with code 255
d(data) 0: Failure: (119) No valid meta-data signature found.

    ==> Use 'drbdadm create-md res' to initialize meta-data area. <==

[data] cmd /sbin/drbdsetup 0 disk /dev/local/data /dev/local/data internal --set-defaults --create-device --on-io-error=detach failed - continuing!

s(data) ].


Also, below is the current status of the drbd service:


drbd driver loaded OK; device status:
version: 8.3.16 (api:88/proto:86-97)
GIT-hash: a798fa7e274428a357657fb52f0ecf40192c1985 build by phil@Build64R6, 2014-11-24 14:51:37
m:res   cs         ro                 ds                 p  mounted  fstype
0:data  Connected  Secondary/Primary  Diskless/UpToDate  C





If I run `drbdadm create-md res` to initialize the meta-data, will it wipe the disk on that node, and will it affect node2's data?

Or will it wipe only node1's disk, which will then be synced with node2 later?
 

I tried installing the packages below and starting the drbd service:

kmod-drbd83-8.3.16-3.el6.elrepo.x86_64
drbd83-utils-8.3.16-1.el6.elrepo.x86_64



# Answer 1
**Score**: 1

You're attempting to go from DRBD 8.4.x -> 8.3.x. You cannot "downgrade" DRBD without recreating metadata on both nodes.
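
As a side note, if you want to see what is (or is not) in the metadata area before recreating anything, the low-level tool from the error output can try to dump it; this is only a hedged suggestion, the device path is simply copied from the log above, and on node1 it is expected to fail with the same "no valid meta-data signature" message. Run it only while the resource is down or detached:

```
# Attempt to dump the existing on-disk metadata for minor 0 (resource "data")
drbdmeta 0 v08 /dev/local/data internal dump-md
```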

If you run `drbdadm create-md res` on both nodes and accept the warning messages, your data will be kept. Only the DRBD metadata will be wiped and recreated.
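
As a rough sketch of that procedure, assuming the resource is named `data` as in the log output and that you work on one node at a time (see below):

```
# Recreate DRBD's internal metadata for resource "data" on this node;
# per the above, this rewrites only the metadata area of /dev/local/data
drbdadm create-md data        # answer "yes" at the confirmation prompts

# Re-attach the backing disk (or simply restart the drbd service); the node
# comes back Inconsistent and resyncs from the UpToDate peer
drbdadm attach data

# Watch the resync until both sides report UpToDate
cat /proc/drbd
```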

It is still a good idea to take a backup of your data just in case. You can also "downgrade" one node at a time, to make sure the data looks good on the downgraded node before downgrading the peer.
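
A minimal sketch of those precautions; the backup destination is a hypothetical path, and the copy should be taken from whichever node still holds a good copy of the data:

```
# Block-level copy of the backing device before touching any metadata
# (/backup/data.img is a hypothetical destination -- adjust to your setup)
dd if=/dev/local/data of=/backup/data.img bs=1M

# After the first node has been downgraded and has resynced, confirm both
# sides report UpToDate before repeating the procedure on the peer
cat /proc/drbd
```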



