Kafka AdminClientConfig ignoring provided configuration

Question


I have a microservice-based Java (Spring Boot) application where I'm integrating Kafka for event-driven internal service communication. The services run in a docker-compose setup, all on the same bridged network. I've added cp-kafka to that docker-compose, again on the same network.

My problem is that once I start docker-compose, neither the producer nor the consumer can connect to the broker. What happens is that the AdminClientConfig uses localhost:9092 rather than the kafka:9092 I've defined as the advertised listener in the broker configuration.

This is the output I get at the producer:

2023-02-14 13:09:17.563  INFO [article-service,,] 1 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
2023-02-14T13:09:17.563482000Z 	bootstrap.servers = [localhost:9092]
2023-02-14T13:09:17.563518700Z 	client.dns.lookup = use_all_dns_ips
2023-02-14T13:09:17.563524100Z 	client.id = 
2023-02-14T13:09:17.563528200Z 	connections.max.idle.ms = 300000
...

The consumer would briefly connect using the ConsumerConfig I've provided:

2023-02-14 13:10:13.358  INFO 1 --- [           main] o.a.k.clients.consumer.ConsumerConfig    : ConsumerConfig values: 
        allow.auto.create.topics = true
        auto.commit.interval.ms = 5000
        auto.offset.reset = earliest
        bootstrap.servers = [kafka:9092]
        check.crcs = true
        client.dns.lookup = use_all_dns_ips
        client.id = consumer-saveArticle-1
        client.rack = 
        connections.max.idle.ms = 540000
...

However, right after that it'd retry, this time using the AdminClientConfig instead:

2023-02-14 13:10:42.365  INFO 1 --- [   scheduling-1] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
        bootstrap.servers = [localhost:9092]
        client.dns.lookup = use_all_dns_ips
        client.id = 
        connections.max.idle.ms = 300000

Relevant parts of docker-compose.yml

...
networks:
  backend:
    name: backend
    driver: bridge

services:
  zookeeper:
    image: confluentinc/cp-zookeeper:7.3.0
    container_name: dev.infra.zookeeper
    networks:
      - backend
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181
      ZOOKEEPER_TICK_TIME: 2000

  kafka:
    image: confluentinc/cp-kafka:7.3.0
    container_name: kafka
    networks:
      - backend
    ports:
      - "9092:9092"
    depends_on:
      - zookeeper
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: 'zookeeper:2181'
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_LISTENERS: PLAINTEXT://:9092
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
      AUTO_CREATE_TOPICS_ENABLE: 'false'
...

Producer application.yml

spring:
  kafka:
    producer:
      bootstrap-servers: kafka:9092
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
    topic:
      name: saveArticle

Consumer application.yml

spring:
  kafka:
    consumer:
      bootstrap-servers: kafka:9092
      auto-offset-reset: earliest
      group-id: stock
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
    topic:
      name: saveArticle

Kafka dependencies I'm using:

		<dependency>
			<groupId>org.springframework.cloud</groupId>
			<artifactId>spring-cloud-stream-binder-kafka</artifactId>
		</dependency>
		<dependency>
			<groupId>org.springframework.kafka</groupId>
			<artifactId>spring-kafka</artifactId>
			<version>3.0.2</version>
			<type>pom</type>
		</dependency>

Any clue as to where it's getting localhost:9092 from, and why it's ignoring the kafka:9092 host I've explicitly specified in the broker config? How can I resolve my problem?

Answer 1

Score: 3


You only need one YAML file per application.

The error occurs because you're not setting spring.kafka.bootstrap-servers=kafka:9092 — you're setting bootstrap servers only on the producer and consumer clients, individually. So this has nothing to do with what the broker advertises; the admin client is simply falling back to the spring-kafka default value (localhost:9092).

You could add a spring.kafka.admin section, but it's better not to duplicate config unnecessarily.
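For instance, a sketch of a merged application.yml with bootstrap-servers lifted to the common spring.kafka level, so the producer, consumer, and admin clients all inherit it (property placement per the Spring Boot docs linked below; the individual values are taken from the question's two files):

```yaml
spring:
  kafka:
    bootstrap-servers: kafka:9092   # inherited by producer, consumer, and admin clients
    producer:
      key-serializer: org.apache.kafka.common.serialization.StringSerializer
      value-serializer: org.apache.kafka.common.serialization.StringSerializer
    consumer:
      group-id: stock
      auto-offset-reset: earliest
      key-deserializer: org.apache.kafka.common.serialization.StringDeserializer
      value-deserializer: org.apache.kafka.common.serialization.StringDeserializer
```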

https://docs.spring.io/spring-boot/docs/current/reference/html/messaging.html#messaging.kafka

However, you will need to advertise localhost:9092 if you're trying to run this code on your host machine, otherwise, you'll end up with UnknownHostException: kafka
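One common way to cover both cases is to configure two listeners on the broker — an internal one advertised as kafka:29092 for containers on the compose network, and an external one advertised as localhost:9092 for clients on the host machine. A sketch (the INTERNAL/EXTERNAL listener names and port 29092 are illustrative choices, not part of the original answer):

```yaml
  kafka:
    image: confluentinc/cp-kafka:7.3.0
    ports:
      - "9092:9092"   # only the external listener needs to be published to the host
    environment:
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_LISTENERS: INTERNAL://:29092,EXTERNAL://:9092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:29092,EXTERNAL://localhost:9092
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

Containerized services would then use kafka:29092 as their bootstrap servers, while anything running directly on the host keeps using localhost:9092.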

huangapple
  • Published 2023-02-14 21:40:34
  • Please retain the source link when reposting: https://go.coder-hub.com/75448681.html