Error when configuring Shared Store for High Availability for Red Hat AMQ (ActiveMQ Artemis 2.28.0)


I am trying to configure two brokers on two different virtual machines and would like to achieve the setup shown in the picture below.

(Image: Shared Store for High Availability diagram)

 - VM IP Address: `10.0.2.15`
 - VM2 IP Address: `10.0.2.4`

I have already created an NFS share on the first machine:
(Image: NFS share drive)
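The exact export configuration isn't shown above, so this is only a sketch (using the IPs and the paths that appear in the broker configs below) of how the share is set up:

```shell
# On VM1 (10.0.2.15) -- hypothetical /etc/exports entry for the shared data
# directory (the actual exports line was not included in the question):
#   /srv/nfs_share 10.0.2.4(rw,sync,no_subtree_check)
sudo exportfs -ra

# On VM2 (10.0.2.4) -- mount the share where Broker 2 expects it:
sudo mount -t nfs4 10.0.2.15:/srv/nfs_share /tmp/nfs
```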

The Red Hat documentation says that to get high availability with shared storage you need to configure a cluster:
(Image: Red Hat HA prerequisites from the documentation)

Below are the configurations for my two brokers:

Broker 1 (Master):

```xml
<name>Broker Master</name>
<persistence-enabled>true</persistence-enabled>
<!--
   It is recommended to keep this value as 1, maximizing the number of records stored about redeliveries.
   However, if you must preserve the state of individual redeliveries, you may increase this value or set it to -1 (infinite).
-->
<max-redelivery-records>1</max-redelivery-records>
<!--
   This can be ASYNCIO, MAPPED, or NIO.
   ASYNCIO: Linux libaio
   MAPPED: mmap files
   NIO: plain Java files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/srv/nfs_share/data/paging</paging-directory>
<bindings-directory>/srv/nfs_share/data/bindings</bindings-directory>
<journal-directory>/srv/nfs_share/data/journal</journal-directory>
<large-messages-directory>/srv/nfs_share/data/large-messages</large-messages-directory>

...

<connectors>
   <!-- Connectors announced through cluster connections and notifications -->
   <connector name="netty-connector">tcp://10.0.2.15:61616</connector>
   <connector name="broker2-connector">tcp://10.0.2.4:61616</connector>
</connectors>

...

<acceptors>
   <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>
   <!-- AMQP Acceptor. Listens on the default AMQP port for AMQP traffic. -->
   <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
   <!-- STOMP Acceptor. -->
   <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
   <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
   <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
   <!-- MQTT Acceptor -->
   <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <local-bind-address>10.0.2.15</local-bind-address>
      <local-bind-port>5432</local-bind-port>
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>2000</broadcast-period>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
<discovery-groups>
   <discovery-group name="my-discovery-group">
      <local-bind-address>10.0.2.15</local-bind-address>
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>broker2-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
<ha-policy>
   <shared-store>
      <master>
         <failover-on-shutdown>true</failover-on-shutdown>
      </master>
   </shared-store>
</ha-policy>
```

Broker 2 (Slave):

```xml
<name>Broker Slave</name>
<persistence-enabled>true</persistence-enabled>
<!--
   It is recommended to keep this value as 1, maximizing the number of records stored about redeliveries.
   However, if you must preserve the state of individual redeliveries, you may increase this value or set it to -1 (infinite).
-->
<max-redelivery-records>1</max-redelivery-records>
<!--
   This can be ASYNCIO, MAPPED, or NIO.
   ASYNCIO: Linux libaio
   MAPPED: mmap files
   NIO: plain Java files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/tmp/nfs/data/paging</paging-directory>
<bindings-directory>/tmp/nfs/data/bindings</bindings-directory>
<journal-directory>/tmp/nfs/data/journal</journal-directory>
<large-messages-directory>/tmp/nfs/data/large-messages</large-messages-directory>

...

<connectors>
   <!-- Connectors announced through cluster connections and notifications -->
   <connector name="netty-connector">tcp://10.0.2.4:61616</connector>
   <connector name="broker1-connector">tcp://10.0.2.15:61616</connector>
</connectors>

...

<acceptors>
   <acceptor name="artemis">tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false</acceptor>
   <!-- AMQP Acceptor. Listens on the default AMQP port for AMQP traffic. -->
   <acceptor name="amqp">tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true</acceptor>
   <!-- STOMP Acceptor. -->
   <acceptor name="stomp">tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true</acceptor>
   <!-- HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients. -->
   <acceptor name="hornetq">tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true</acceptor>
   <!-- MQTT Acceptor -->
   <acceptor name="mqtt">tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true</acceptor>
</acceptors>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<broadcast-groups>
   <broadcast-group name="my-broadcast-group">
      <local-bind-address>10.0.2.4</local-bind-address>
      <local-bind-port>5432</local-bind-port>
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <broadcast-period>2000</broadcast-period>
      <connector-ref>netty-connector</connector-ref>
   </broadcast-group>
</broadcast-groups>
<discovery-groups>
   <discovery-group name="my-discovery-group">
      <local-bind-address>10.0.2.4</local-bind-address>
      <group-address>231.7.7.7</group-address>
      <group-port>9876</group-port>
      <refresh-timeout>10000</refresh-timeout>
   </discovery-group>
</discovery-groups>
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <static-connectors>
         <connector-ref>broker1-connector</connector-ref>
      </static-connectors>
   </cluster-connection>
</cluster-connections>
<ha-policy>
   <shared-store>
      <slave>
         <allow-failback>true</allow-failback>
         <failover-on-shutdown>true</failover-on-shutdown>
      </slave>
   </shared-store>
</ha-policy>
```

It looks like the backup (slave) server is working:
(Screenshot: log output from the backup broker)

PROBLEM QUESTION

I can see that the shared storage works, because journal files accumulate on machine 1 but not on the other. However, when I shut down the master (VM1), the slave (VM2) only logs that the connection has been broken; nothing further happens.

(Screenshot: backup broker log after the master was shut down)

It looks like VM2 is aware of VM1's existence, and yet they are not connected?

After a few minutes have passed, such an error also appears on the broker master (VM1):
(Screenshot: error logged on the master broker)

Console configuration screenshot:
(Screenshot: console configuration)

Could you please help me resolve this issue?

I tried various settings for the discovery group, broadcast group, and cluster connection, but none of them helped.

Answer 1

Score: 0

You need to ensure the NFS mount is using the configuration recommendations from the Red Hat AMQ documentation.
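For example (hypothetical values; verify the exact option list and values against the Red Hat documentation), the client-side mount for the journal is commonly recommended with options along these lines, so that attribute caching is disabled and a lost lock is noticed quickly:

```shell
# Sketch of a mount for the Artemis/AMQ journal on NFSv4, using the IP and
# paths from the question. The option set is an assumption based on common
# recommendations, not a quote from the Red Hat docs:
#   sync,intr,noac          -> write through and disable attribute caching
#   soft,timeo=50,retrans=1 -> fail fast so the shared-store lock can be
#                              taken over by the backup
sudo mount -t nfs4 -o sync,intr,noac,soft,timeo=50,retrans=1 \
    10.0.2.15:/srv/nfs_share /tmp/nfs
```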

Keep in mind that the cluster-connection is just there for connectivity directly between the brokers so that the primary can inform clients about where to connect in case of a failure. However, the file lock on the shared journal ensures that only one broker is active at any point in time, and the locking behavior depends on the NFS configuration.

Lastly, your configuration is a little confusing because both your primary and your backup have broadcast-group and discovery-group defined, but your cluster-connection is using static-connectors which means the broadcast-group and discovery-group aren't being used. You should either remove the superfluous broadcast-group and discovery-group configurations or actually use the discovery-group via discovery-group-ref in your cluster-connection in lieu of static-connectors. In short, since you use static-connectors you don't need to use discovery-group and broadcast-group. Fixing this won't fix your problem, but it will make your configuration less confusing which is important when diagnosing issues.
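As a minimal sketch of the second option, the `cluster-connection` could reference the existing `discovery-group` instead of the `static-connectors` block, e.g.:

```xml
<cluster-connections>
   <cluster-connection name="my-cluster">
      <connector-ref>netty-connector</connector-ref>
      <!-- use UDP discovery instead of a static connector list -->
      <discovery-group-ref discovery-group-name="my-discovery-group"/>
   </cluster-connection>
</cluster-connections>
```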

huangapple
  • Published on 2023-08-04 03:20:23
  • Original link: https://go.coder-hub.com/76831049.html