ActiveMQ prefetchPolicy of 1 causes messages to timeout, while 0 appears to work. Misunderstanding of concept?



We have queues where some messages may take milliseconds to process and some minutes (i.e. both fast and slow messages). A problem we have been seeing is that messages get dropped due to timeout (no consumer available within the TTL) even though there are plenty of consumers available.

For this reason we use jms.prefetchPolicy.all=1 as part of the connection string for all consumers. This value was chosen from this information:

> Large prefetch values are recommended for high performance with high
> message volumes. However, for lower message volumes, where each
> message takes a long time to process, the prefetch should be set to 1.
> This ensures that a consumer is only processing one message at a time.
> Specifying a prefetch limit of zero, however, will cause the consumer
> to poll for messages, one at a time, instead of the message being
> pushed to the consumer.
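For reference, the prefetch limit does not have to be set connection-wide as above; ActiveMQ also accepts it as a destination option on an individual queue, which lets different consumers use different values. In this sketch `broker-host` and `MY.QUEUE` are placeholders:

```
# Connection-wide (applies to all consumer types on this connection), as used above:
tcp://broker-host:61616?jms.prefetchPolicy.all=1

# Per consumer, as a destination option on the queue name:
queue://MY.QUEUE?consumer.prefetchSize=1
```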

However, we still see the problem. As a test I instead changed the value to 0, and after about two weeks on this configuration we have yet to see a dropped message. Previously it would happen several times per day.

Perhaps I'm misunderstanding the documentation, but my end goal is that no message should be handed to a consumer until that consumer is actually free. Does a prefetch value of 1 mean that a single message may still be placed in a consumer's buffer even while it's busy processing something else?
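To illustrate what I suspect is happening, here is a toy, broker-free simulation (not real ActiveMQ code; it assumes the prefetch window frees on delivery, AUTO_ACKNOWLEDGE-style, so eager dispatch can buffer a message at a busy consumer). One consumer is slow (100 time units per message), one fast (1 unit), and three messages are ready at t=0:

```java
import java.util.*;

public class PrefetchDemo {
    // Per-message processing times of each consumer (index 0 is slow, 1 is fast).
    static final long[] SPEED = {100, 1};

    // prefetch=1, ack-on-delivery: the broker round-robins each message to the
    // next consumer as soon as its window frees - even if it is mid-message,
    // so a message can sit in a busy consumer's buffer while another is idle.
    static long[] waitsEagerRoundRobin(int messages) {
        long[] freeAt = new long[SPEED.length]; // when each consumer next becomes free
        long[] waits = new long[messages];      // time each message spends buffered
        for (int m = 0; m < messages; m++) {
            int c = m % SPEED.length;           // eager round-robin dispatch at t=0
            waits[m] = freeAt[c];               // buffered until that consumer is free
            freeAt[c] += SPEED[c];
        }
        return waits;
    }

    // prefetch=0: the broker holds every message until a consumer polls, so
    // each message goes to whichever consumer becomes free first.
    static long[] waitsPolling(int messages) {
        long[] freeAt = new long[SPEED.length];
        long[] waits = new long[messages];
        for (int m = 0; m < messages; m++) {
            int c = 0;                          // pick the earliest-free consumer
            for (int i = 1; i < SPEED.length; i++)
                if (freeAt[i] < freeAt[c]) c = i;
            waits[m] = freeAt[c];
            freeAt[c] += SPEED[c];
        }
        return waits;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(waitsEagerRoundRobin(3))); // [0, 0, 100]
        System.out.println(Arrays.toString(waitsPolling(3)));         // [0, 0, 1]
    }
}
```

In this toy model the third message waits 100 units behind the slow consumer under eager dispatch (long enough to blow a TTL), but only 1 unit when the fast consumer is allowed to poll for it, which matches what we observe with prefetch 0.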

> Specifying a prefetch limit of zero, however, will cause the consumer to poll for messages, one at a time, instead of the message being pushed to the consumer.

Is this necessarily a bad thing? The documentation makes it out to be something to avoid (polling bad, pushing good). Perhaps polling is the only way this can work, since only the worker/consumer knows when it's ready to process?

As an alternative solution, perhaps it's bad practice to mix "fast" and "slow" messages on the same queue, but I'd rather not make architectural changes unless necessary.

Answer 1

Score: 1


The documentation is somewhat misleading because using a prefetch of 0 and polling for messages is not a bad thing if it gives you the behavior you actually want.

Generally speaking, prefetch is a performance optimization to avoid consumers polling the broker since repeated network round-trips to fetch each message can add up over time, especially if the consumer is fast. However, the value is configurable exactly because not all use-cases are the same. If some of your consumers are starving and messages are timing out then by all means lower the prefetch until everything is working as you expect.

It is usually simpler to segregate fast and slow consumers/messages on different queues, but it's not required. If the variability is in the consumers themselves then each consumer can have its own prefetch value. As long as it's adjusted properly no consumer should starve and performance should be close to optimal. If the variability is instead in the messages themselves then you'll have to use a "lowest common denominator" prefetch value, which means performance won't be optimal, but at least no consumers will starve.

huangapple
  • Posted on 2023-07-10 20:48:44
  • Original link (please retain when reposting): https://go.coder-hub.com/76653908.html