Kafka Streams aggregation RecordTooLargeException
Question
I am trying to aggregate data from a single stream with 5-minute tumbling windows. Initially it worked fine and printed the aggregated records. I am developing with Java 8.
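For reference, a minimal sketch of the kind of topology I mean, assuming a String-keyed input topic and a simple count aggregation (the topic name, application id, bootstrap servers, and serdes are placeholders, not my actual code):

```java
import java.time.Duration;
import java.util.Properties;

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.TimeWindows;

public class TumblingWindowApp {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "tumbling-agg-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder

        StreamsBuilder builder = new StreamsBuilder();

        // Group records by key, bucket them into 5-minute tumbling windows,
        // count per window, and print each aggregated record.
        builder.stream("input-topic", Consumed.with(Serdes.String(), Serdes.String()))
               .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
               .windowedBy(TimeWindows.of(Duration.ofMinutes(5)))
               .count()
               .toStream()
               .foreach((windowedKey, count) ->
                       System.out.println(windowedKey + " -> " + count));

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```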
But later I started receiving an error:
"org.apache.kafka.common.errors.RecordTooLargeException: The message is 5292482 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration"
Now, every time I start my app in the EKS cluster, it crashes within a minute with the same error.
I tried setting the following stream configs, but they did not help either:
- StreamsConfig.RECEIVE_BUFFER_CONFIG (50 MB)
- StreamsConfig.CACHE_MAX_BYTES_BUFFERING_CONFIG (50 MB)
Would any of the following resolve this issue?
- Using an inMemoryKeyValueStore. Is there any specific property to allocate memory for an inMemoryKeyValueStore?
- Or should I switch to a persistentKeyValueStore?
- I am using AWS MSK, so while creating the cluster, should I set the broker- and topic-level setting message.max.bytes to an appropriate value?
Thanks in advance.
Answer 1
Score: 1
It's the producer config max.request.size (as mentioned in the error message) that you need to increase to address the issue. Note that you might also need to increase the broker/topic config message.max.bytes that you mentioned.
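In Kafka Streams, producer-level settings can be passed through the streams configuration with the producer. prefix. A minimal sketch, assuming 10 MB is a large enough limit (the application id, bootstrap servers, and the 10 MB value are placeholders chosen to clear the ~5.3 MB record in the error message):

```java
import java.util.Properties;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.streams.StreamsConfig;

public class LargeRecordProps {
    // Streams properties that let the embedded producer send records
    // larger than the 1 MB default max.request.size.
    static Properties build() {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "tumbling-agg-app"); // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker:9092");   // placeholder
        // Expands to "producer.max.request.size", so it applies only to
        // the producer embedded in the Streams application.
        props.put(StreamsConfig.producerPrefix(ProducerConfig.MAX_REQUEST_SIZE_CONFIG),
                  10 * 1024 * 1024);
        return props;
    }
}
```

On the broker side, message.max.bytes (and the topic-level override max.message.bytes, including on the internal repartition/changelog topics that Streams creates) must be at least as large, or the broker will reject the write.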