Error when configuring Shared Store for High Availability for Red Hat AMQ (ActiveMQ Artemis 2.28.0)
Question
I am trying to configure two brokers on two different virtual machines. I would like to achieve the setup shown in the picture below.
[![Shared Store for High Availability Diagram Picture][1]][1]
- VM IP Address: `10.0.2.15`
- VM2 IP Address: `10.0.2.4`
I have already created an NFS share on the first machine:
[![NFS Share Drive Proof Picture][2]][2]
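Roughly, the export on VM1 and the mount on VM2 look like this (a sketch for context; the export and mount options here are illustrative assumptions, not my exact settings):

```bash
# /etc/exports on VM1 (10.0.2.15) -- export options are illustrative:
#   /srv/nfs_share 10.0.2.4(rw,sync,no_subtree_check)
# then publish the export:
sudo exportfs -ra

# On VM2 (10.0.2.4): mount the share where the backup broker expects its data
sudo mkdir -p /tmp/nfs
sudo mount -t nfs 10.0.2.15:/srv/nfs_share /tmp/nfs
```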
In the Red Hat documentation I read that, to get high availability with shared storage, you need to configure a cluster:
[![Red Hat HA Prerequisites Picture][3]][3]
I will make the configurations for my two brokers available below:
Broker 1 (Master):
```xml
<name>Broker Master</name>
<persistence-enabled>true</persistence-enabled>
<!--
It is recommended to keep this value as 1, maximizing the number of records stored about redeliveries.
However if you must preserve state of individual redeliveries, you may increase this value or set it to -1 (infinite).
-->
<max-redelivery-records>1</max-redelivery-records>
<!--
this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/srv/nfs_share/data/paging</paging-directory>
<bindings-directory>/srv/nfs_share/data/bindings</bindings-directory>
<journal-directory>/srv/nfs_share/data/journal</journal-directory>
<large-messages-directory>/srv/nfs_share/data/large-messages</large-messages-directory>
...
<connectors>
<!--
Connector used to be announced through cluster connections and notifications
-->
<connector name="netty-connector">tcp://10.0.2.15:61616</connector>
<connector name="broker2-connector">tcp://10.0.2.4:61616</connector>
</connectors>
...
<acceptor name="artemis">
tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false
</acceptor>
<!--
AMQP Acceptor. Listens on default AMQP port for AMQP traffic.
-->
<acceptor name="amqp">
tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true
</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">
tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true
</acceptor>
<!--
HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients.
-->
<acceptor name="hornetq">
tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true
</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">
tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true
</acceptor>
</acceptors>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<local-bind-address>10.0.2.15</local-bind-address>
<local-bind-port>5432</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>2000</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<local-bind-address>10.0.2.15</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<static-connectors>
<connector-ref>broker2-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<master>
<failover-on-shutdown>true</failover-on-shutdown>
</master>
</shared-store>
</ha-policy>
```

Broker 2 (Slave):
```xml
<name>Broker Slave</name>
<persistence-enabled>true</persistence-enabled>
<!--
It is recommended to keep this value as 1, maximizing the number of records stored about redeliveries.
However if you must preserve state of individual redeliveries, you may increase this value or set it to -1 (infinite).
-->
<max-redelivery-records>1</max-redelivery-records>
<!--
this could be ASYNCIO, MAPPED, NIO
ASYNCIO: Linux Libaio
MAPPED: mmap files
NIO: Plain Java Files
-->
<journal-type>ASYNCIO</journal-type>
<paging-directory>/tmp/nfs/data/paging</paging-directory>
<bindings-directory>/tmp/nfs/data/bindings</bindings-directory>
<journal-directory>/tmp/nfs/data/journal</journal-directory>
<large-messages-directory>/tmp/nfs/data/large-messages</large-messages-directory>
...
<connectors>
<!--
Connector used to be announced through cluster connections and notifications
-->
<connector name="netty-connector">tcp://10.0.2.4:61616</connector>
<connector name="broker1-connector">tcp://10.0.2.15:61616</connector>
</connectors>
...
<acceptor name="artemis">
tcp://0.0.0.0:61616?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;amqpMinLargeMessageSize=102400;protocols=CORE,AMQP,STOMP,HORNETQ,MQTT,OPENWIRE;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpDuplicateDetection=true;supportAdvisory=false;suppressInternalManagementObjects=false
</acceptor>
<!--
AMQP Acceptor. Listens on default AMQP port for AMQP traffic.
-->
<acceptor name="amqp">
tcp://0.0.0.0:5672?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=AMQP;useEpoll=true;amqpCredits=1000;amqpLowCredits=300;amqpMinLargeMessageSize=102400;amqpDuplicateDetection=true
</acceptor>
<!-- STOMP Acceptor. -->
<acceptor name="stomp">
tcp://0.0.0.0:61613?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=STOMP;useEpoll=true
</acceptor>
<!--
HornetQ Compatibility Acceptor. Enables HornetQ Core and STOMP for legacy HornetQ clients.
-->
<acceptor name="hornetq">
tcp://0.0.0.0:5445?anycastPrefix=jms.queue.;multicastPrefix=jms.topic.;protocols=HORNETQ,STOMP;useEpoll=true
</acceptor>
<!-- MQTT Acceptor -->
<acceptor name="mqtt">
tcp://0.0.0.0:1883?tcpSendBufferSize=1048576;tcpReceiveBufferSize=1048576;protocols=MQTT;useEpoll=true
</acceptor>
</acceptors>
<cluster-user>admin</cluster-user>
<cluster-password>admin</cluster-password>
<broadcast-groups>
<broadcast-group name="my-broadcast-group">
<local-bind-address>10.0.2.4</local-bind-address>
<local-bind-port>5432</local-bind-port>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<broadcast-period>2000</broadcast-period>
<connector-ref>netty-connector</connector-ref>
</broadcast-group>
</broadcast-groups>
<discovery-groups>
<discovery-group name="my-discovery-group">
<local-bind-address>10.0.2.4</local-bind-address>
<group-address>231.7.7.7</group-address>
<group-port>9876</group-port>
<refresh-timeout>10000</refresh-timeout>
</discovery-group>
</discovery-groups>
<cluster-connections>
<cluster-connection name="my-cluster">
<connector-ref>netty-connector</connector-ref>
<static-connectors>
<connector-ref>broker1-connector</connector-ref>
</static-connectors>
</cluster-connection>
</cluster-connections>
<ha-policy>
<shared-store>
<slave>
<allow-failback>true</allow-failback>
<failover-on-shutdown>true</failover-on-shutdown>
</slave>
</shared-store>
</ha-policy>
```

It looks like the backup (slave) server is working:
PROBLEM QUESTION
I can see that the shared storage works, because the files accumulate on machine 1 but not on the other. However, when I shut down the master (VM1), the slave (VM2) only reports that the connection has been broken, and nothing further happens.
It looks like VM2 is aware of VM1's existence, and yet they are not connected?
After a few minutes, an error also appears on the master broker (VM1):
Console configuration screenshot:
Could you please help me resolve this issue?
I tried various settings for the discovery group, broadcast group, and cluster connection, but none of them helped.
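For reference, this is roughly how I test the failover (a sketch; the process pattern and log path are assumptions based on a default Artemis instance layout):

```bash
# From VM2: first confirm the master's cluster port is reachable at all
nc -zv 10.0.2.15 61616

# On VM1: simulate a hard master failure. A clean shutdown triggers failover
# via failover-on-shutdown=true; kill -9 instead relies on the OS/NFS
# releasing the shared-store file lock once the process dies.
kill -9 $(pgrep -f 'org.apache.activemq.artemis.boot.Artemis')

# On VM2: watch the backup's log for it announcing that it has become live
tail -f <broker-instance>/log/artemis.log | grep -i live
```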
Answer 1 (Score: 0)
You need to ensure the NFS mount is using the configuration recommendations from the Red Hat AMQ documentation.
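For illustration, such a mount typically ends up looking something like this (the option list and values here are assumptions for the sketch; take the authoritative set from the Red Hat AMQ documentation for your version):

```bash
# NFSv4 client mount for the shared store on the backup machine.
# The intent behind the options: synchronous writes and minimal client-side
# caching, so lock and journal state changes are observed promptly.
sudo mount -t nfs4 -o sync,intr,noac,soft,lookupcache=none 10.0.2.15:/srv/nfs_share /tmp/nfs
```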
Keep in mind that the `cluster-connection` is just there for connectivity directly between the brokers so that the primary can inform clients about where to connect in case of a failure. However, the file lock on the shared journal ensures that only one broker is active at any point in time, and the locking behavior depends on the NFS configuration.
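If you want to observe the lock, the live broker holds it on a file in the shared journal directory (the file name below is what upstream Artemis uses via its FileLockNodeManager; treat it as an assumption for your AMQ build):

```bash
# The live broker holds an exclusive lock on this file on the shared store
ls -l /srv/nfs_share/data/journal/server.lock

# lslocks (util-linux) lists POSIX locks held on the NFS server itself; whether
# a lock held over NFS shows up depends on the NFS version and lock daemon setup
sudo lslocks | grep -i server.lock
```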
Lastly, your configuration is a little confusing because both your primary and your backup have a `broadcast-group` and a `discovery-group` defined, but your `cluster-connection` is using `static-connectors`, which means the `broadcast-group` and `discovery-group` aren't being used. You should either remove the superfluous `broadcast-group` and `discovery-group` configurations or actually use the `discovery-group` via `discovery-group-ref` in your `cluster-connection` in lieu of `static-connectors`. In short, since you use `static-connectors` you don't need a `discovery-group` or a `broadcast-group`. Fixing this won't fix your problem, but it will make your configuration less confusing, which is important when diagnosing issues.
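Concretely, either of these shapes would be consistent, reusing the connector and group names from your broker.xml (shown for broker 1; mirror it on broker 2):

```xml
<!-- Option A: static discovery only; delete broadcast-groups/discovery-groups -->
<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <static-connectors>
      <connector-ref>broker2-connector</connector-ref>
   </static-connectors>
</cluster-connection>

<!-- Option B: UDP discovery; keep the groups, drop static-connectors -->
<cluster-connection name="my-cluster">
   <connector-ref>netty-connector</connector-ref>
   <discovery-group-ref discovery-group-name="my-discovery-group"/>
</cluster-connection>
```

Option A is the simpler choice for a fixed two-node pair; Option B only pays off if more brokers may join on the same multicast group later.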