@KafkaListener better solution to consume one million plus messages a day?

Question



I have a Spring KafkaListener in a Spring Boot project:

@KafkaListener(topics = "topic-one", groupId = "groupone")
public void listen(CustomerDetails customerDetails) {
    if (customerDetails.getCertainDetails() != null && !customerDetails.getCertainDetails().isEmpty()) {
        // placeholder for the actual database insert call
        dbInsert(customerDetails);
    } else {
        log.info(customerDetails.toString());
    }
}

This listener will be receiving one million plus messages a day.
How do I ensure that I don't run into concurrency issues while so many messages are coming in and being inserted into the database? Or do I not need to worry about it?
Is there a better solution than the above approach?

Answer 1

Score: 1


> How do I ensure that I don't run into concurrency issues while too many messages are coming in and being inserted into the database?

Unless your database client runs asynchronously, you are unlikely to run into that problem. The KafkaListener is blocking, and you can configure the max.poll.records setting in your Kafka consumer properties to handle backpressure; you will never be handed more records than that at once.
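For reference, max.poll.records can be set on the consumer factory so that each poll() hands the listener at most that many records. Below is a minimal sketch of such a configuration, assuming CustomerDetails is the question's own class and arrives as JSON; the broker address, deserializer setup, and the 500-record cap are illustrative values, not settings from the question.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;
import org.springframework.kafka.support.serializer.JsonDeserializer;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConsumerFactory<String, CustomerDetails> consumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // assumed broker address
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "groupone");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, 500);                // cap how many records a single poll() can return
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
        props.put(JsonDeserializer.TRUSTED_PACKAGES, "*");                     // assumed: trust all packages for the JSON payload (tighten in production)
        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, CustomerDetails.class.getName()); // deserialize values into CustomerDetails
        return new DefaultKafkaConsumerFactory<>(props);
    }
}

If you rely on Spring Boot's auto-configuration instead of a custom factory, the equivalent setting is the spring.kafka.consumer.max-poll-records application property.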

Are you currently seeing anything that indicates this is actually the case?

> Is there a better solution

In general, yes. It requires you not to manage Kafka consumers on your own and instead use a Kafka Connect sink connector for your respective database.
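As an illustration, for a relational database this could be something like Confluent's JDBC sink connector, defined declaratively and submitted to the Kafka Connect REST API (POST /connectors). The connector name, connection URL, credentials, and task count below are placeholders, and the topic data must carry a schema (for example Avro with Schema Registry, or JSON with embedded schemas) for the sink to map fields to table columns.

{
  "name": "customer-details-jdbc-sink",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSinkConnector",
    "tasks.max": "2",
    "topics": "topic-one",
    "connection.url": "jdbc:postgresql://db-host:5432/customers",
    "connection.user": "connect_user",
    "connection.password": "connect_password",
    "insert.mode": "insert",
    "auto.create": "true",
    "pk.mode": "none"
  }
}

Kafka Connect then handles offset management, parallelism across tasks, and retries, with no consumer code left to maintain in the application.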
