DSL Integration Flows with retry mechanism and how it works

Question
I have implemented a retry mechanism which works well based on the following:
https://github.com/spring-projects/spring-integration-samples/issues/237
The application consumes events from Kafka, transforms them, and sends them as HTTP requests to a remote service, so the retry mechanism is implemented in the integration flow that sends the HTTP request.
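For reference, the approach in the linked issue boils down to a `RequestHandlerRetryAdvice` applied to the HTTP outbound endpoint. A minimal sketch of such a flow, assuming Spring Integration 6 (`IntegrationFlow.from`; earlier versions use `IntegrationFlows.from`); the channel name, URL, and retry settings here are illustrative assumptions, not the asker's actual code:

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.http.HttpMethod;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.handler.advice.RequestHandlerRetryAdvice;
import org.springframework.integration.http.dsl.Http;
import org.springframework.retry.backoff.ExponentialBackOffPolicy;
import org.springframework.retry.policy.SimpleRetryPolicy;
import org.springframework.retry.support.RetryTemplate;

@Configuration
public class HttpRetryFlowConfig {

    // Retry advice: blocks the calling (Kafka consumer) thread between attempts.
    @Bean
    public RequestHandlerRetryAdvice retryAdvice() {
        RetryTemplate template = new RetryTemplate();
        template.setRetryPolicy(new SimpleRetryPolicy(5));         // up to 5 attempts
        template.setBackOffPolicy(new ExponentialBackOffPolicy()); // growing delay between attempts
        RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
        advice.setRetryTemplate(template);
        return advice;
    }

    // Flow: Kafka listener channel -> transform -> HTTP outbound,
    // with the retry advice attached to the HTTP handler endpoint.
    @Bean
    public IntegrationFlow httpRetryFlow(RequestHandlerRetryAdvice retryAdvice) {
        return IntegrationFlow.from("kafkaInputChannel")
                .transform(Object::toString) // placeholder transformation
                .handle(Http.outboundGateway("https://remote-service.example/events")
                                .httpMethod(HttpMethod.POST),
                        e -> e.advice(retryAdvice))
                .get();
    }
}
```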
During a temporary failure (a network glitch) I was worried about the requests reaching the remote service in the same order as they come in from Kafka, to avoid overwriting, but fortunately it looks like the order is kept; keep me honest here.
It seems that during the retry process all incoming events are "put on hold", and once the remote service comes back up before the last attempt, all of them are sent.
I would like to know two things here:
- Am I correct with my assumption? Is this how the retry mechanism works by default?
- I'm worried about the events backing (or stacking) up due to the amount of time it takes to finish the current flow execution. Is there something here I should take into consideration?
I think I might use an ExecutorChannel so that events could get processed in parallel, but by doing that I wouldn't be able to keep the order of the events.
Thanks.
Answer 1
Score: 1
Your assumption is correct. The retry is done within the same thread, and that thread is blocked for the next event until the send succeeds or the retries are exhausted. It really is done on the same Kafka consumer thread, so no new records are polled from the topic until the retry completes.
It is not a correct architecture to shift the logic to a new thread, e.g. using an ExecutorChannel, since Kafka is based on offset commits, which cannot be done out of order.
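The ordering guarantee described above can be illustrated with a plain-Java sketch (no Spring involved, just the same blocking-retry idea): a single consumer thread retries each send until it succeeds, so later events simply wait behind it and cannot overtake. The `FlakyService` here is a hypothetical stand-in for the remote service during a network glitch.

```java
import java.util.ArrayList;
import java.util.List;

public class BlockingRetryDemo {

    // Simulated remote service whose first few calls fail (a "network glitch").
    static class FlakyService {
        private int failuresLeft;
        final List<Integer> received = new ArrayList<>();

        FlakyService(int failuresLeft) {
            this.failuresLeft = failuresLeft;
        }

        void send(int event) {
            if (failuresLeft > 0) {
                failuresLeft--;
                throw new RuntimeException("temporary failure");
            }
            received.add(event);
        }
    }

    // Blocking retry: the single consumer thread loops on the same event
    // until it succeeds, so the next event cannot be processed in the meantime.
    static void sendWithRetry(FlakyService service, int event, int maxAttempts) {
        for (int attempt = 1; ; attempt++) {
            try {
                service.send(event);
                return;
            } catch (RuntimeException e) {
                if (attempt >= maxAttempts) {
                    throw e; // retries exhausted
                }
            }
        }
    }

    public static void main(String[] args) {
        FlakyService service = new FlakyService(3); // first 3 calls fail
        for (int event = 1; event <= 5; event++) {
            sendWithRetry(service, event, 5);
        }
        // Event 1 needed 4 attempts, yet the order is preserved: [1, 2, 3, 4, 5]
        System.out.println(service.received);
    }
}
```

The trade-off the answer points out falls directly out of this shape: moving `sendWithRetry` onto another thread would let event 2 complete while event 1 is still retrying, and the offsets could then no longer be committed in order.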