
WidlFly cluster with multiple JMS producers and a single consumer

Question


I am running a clustered application on WildFly 25 in a Kubernetes cluster. Users are distributed across the cluster via the ingress. Currently all WildFly nodes in the cluster share the same EAR and have the same configuration, based on standalone-full.xml.

I would like to extend the application to use a JMS client, submit messages to a single JMS queue from any node in the cluster, and have them processed by a single JMS consumer running on a specific node in the cluster. It basically works, and managing/controlling the JMS consumer on a single node is not a problem. Messages created on the same node are processed correctly by the consumer. However, in my quick research and tests I have not found a simple solution to achieve my goal of having messages produced on any node in the cluster processed by the single JMS consumer.

I tried using the JDBC journal instead of the file-based journal, thinking that the central DB would be used by all JMS clients in the cluster and I could simply start my single consumer to process the queue:

                <journal datasource="mariadb" 
                         database="my_db" 
                         messages-table="jms_m"
                         large-messages-table="jms_lm" 
                         page-store-table="jms_ps" 
                         jms-bindings-table="jms_b"/>

This worked to the extent that the DB tables were created and messages were added and processed on each individual node. So with this configuration I still need a consumer per node (on the node) to process messages created by that node. (Reading more of the docs, I think this setup is wrong: ActiveMQ expects distinct tables per node; the docs discuss a table-name prefix rather than the full DB table names that WildFly makes configurable.) So anyway, this is not what I wanted.

Post-solution addition for posterity (as noted in the accepted answer):

> Having all the nodes share the same tables will cause undefined behavior (i.e. it will break).

How can I configure the JMS queue (or JMS client) to allow me to submit messages from any node in the cluster and have them processed (in order) by the consumer on a single node? I was hoping to avoid having specific standalone configurations per node (consumer/producer) if possible.

Answer 1

Score: 1


You just need to cluster your brokers. WildFly ships with an example of how to do this in standalone/configuration/standalone-full-ha.xml, e.g.:

        <subsystem xmlns="urn:jboss:domain:messaging-activemq:13.0">
            <server name="default">
                <security elytron-domain="ApplicationDomain"/>
                <cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
                <statistics enabled="${wildfly.messaging-activemq.statistics-enabled:${wildfly.statistics-enabled:false}}"/>
                <security-setting name="#">
                    <role name="guest" send="true" consume="true" create-non-durable-queue="true" delete-non-durable-queue="true"/>
                </security-setting>
                <address-setting name="#" dead-letter-address="jms.queue.DLQ" expiry-address="jms.queue.ExpiryQueue" max-size-bytes="10485760" page-size-bytes="2097152" message-counter-history-day-limit="10" redistribution-delay="1000"/>
                <http-connector name="http-connector" socket-binding="http" endpoint="http-acceptor"/>
                <http-connector name="http-connector-throughput" socket-binding="http" endpoint="http-acceptor-throughput">
                    <param name="batch-delay" value="50"/>
                </http-connector>
                <in-vm-connector name="in-vm" server-id="0">
                    <param name="buffer-pooling" value="false"/>
                </in-vm-connector>
                <http-acceptor name="http-acceptor" http-listener="default"/>
                <http-acceptor name="http-acceptor-throughput" http-listener="default">
                    <param name="batch-delay" value="50"/>
                    <param name="direct-deliver" value="false"/>
                </http-acceptor>
                <in-vm-acceptor name="in-vm" server-id="0">
                    <param name="buffer-pooling" value="false"/>
                </in-vm-acceptor>
                <jgroups-broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
                <jgroups-discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
                <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>
                <jms-queue name="ExpiryQueue" entries="java:/jms/queue/ExpiryQueue"/>
                <jms-queue name="DLQ" entries="java:/jms/queue/DLQ"/>
                <connection-factory name="InVmConnectionFactory" entries="java:/ConnectionFactory" connectors="in-vm"/>
                <connection-factory name="RemoteConnectionFactory" entries="java:jboss/exported/jms/RemoteConnectionFactory" connectors="http-connector" ha="true" block-on-acknowledge="true" reconnect-attempts="-1"/>
                <pooled-connection-factory name="activemq-ra" entries="java:/JmsXA java:jboss/DefaultJMSConnectionFactory" connectors="in-vm" transaction="xa"/>
            </server>
        </subsystem>

Note the cluster, jgroups-broadcast-group, jgroups-discovery-group, and cluster-connection elements, which are not part of the default standalone-full.xml.
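To summarize, the delta relative to the stock standalone-full.xml is just these elements (copied from the example above; the cluster password placeholder must be changed, and the jgroups-cluster name "activemq-cluster" is arbitrary as long as every node uses the same value):

        <cluster password="${jboss.messaging.cluster.password:CHANGE ME!!}"/>
        <jgroups-broadcast-group name="bg-group1" jgroups-cluster="activemq-cluster" connectors="http-connector"/>
        <jgroups-discovery-group name="dg-group1" jgroups-cluster="activemq-cluster"/>
        <cluster-connection name="my-cluster" address="jms" connector-name="http-connector" discovery-group="dg-group1"/>

Since all nodes share the same configuration, adding these once to the shared standalone-full.xml (inside the messaging-activemq server element) fits the "no per-node configuration" requirement.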

I'm not exactly sure what you mean by "in order," but there are no strict ordering guarantees among all the nodes in the cluster, since clustering involves moving messages between nodes behind the scenes.

Lastly, you definitely don't need to use a centralized database for all the cluster nodes. The file-based journal is the recommended approach. Having all the nodes share the same tables will cause undefined behavior (i.e. it will break).

huangapple
  • Posted on 2023-07-03 21:37:56
  • Please retain this link when reposting: https://go.coder-hub.com/76605283.html