Connecting Spring to Kafka using Docker Compose for localhost development

Question

After hours of trying to find a solution for this problem I decided to post it here.

I have the following docker-compose file, which starts ZooKeeper, two Kafka brokers, and Kafdrop:

version: '2'

networks:
  kafka-net:
    driver: bridge

services:
  zookeeper-server:
    image: 'bitnami/zookeeper:latest'
    networks:
      - kafka-net
    ports:
      - '2181:2181'
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka-broker-1:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9092:9092'
      - '29092:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-1:9092,OUTSIDE://localhost:29092
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9092,OUTSIDE://:29092
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=1
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafka-broker-2:
    image: 'bitnami/kafka:latest'
    networks:
      - kafka-net
    ports:
      - '9093:9092'
      - '29093:29092'
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper-server:2181
      - KAFKA_CFG_ADVERTISED_LISTENERS=INSIDE://kafka-broker-2:9093,OUTSIDE://localhost:29093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      - KAFKA_CFG_LISTENERS=INSIDE://:9093,OUTSIDE://:29093
      - KAFKA_CFG_INTER_BROKER_LISTENER_NAME=INSIDE
      - KAFKA_CFG_BROKER_ID=2
      - ALLOW_PLAINTEXT_LISTENER=yes
    depends_on:
      - zookeeper-server
  kafdrop-web:
    image: obsidiandynamics/kafdrop
    networks:
      - kafka-net
    ports:
      - '9000:9000'
    environment:
      - KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093
    depends_on:
      - kafka-broker-1
      - kafka-broker-2

Then I am trying to connect my Spring command-line app to Kafka. I took the quickstart example from the Spring Kafka docs.

Properties file

spring.kafka.bootstrap-servers=localhost:29092,localhost:29093
spring.kafka.consumer.group-id=cg1
spring.kafka.consumer.auto-offset-reset=earliest

logging.level.org.springframework.kafka=debug
logging.level.org.apache.kafka=debug

And the application code:

@SpringBootApplication
public class App {
    public static void main(String[] args) {
        SpringApplication.run(App.class, args);
    }
}

@Component
public class Runner implements CommandLineRunner {
    public static Logger logger = LoggerFactory.getLogger(Runner.class);

    @Autowired
    private KafkaTemplate<String, String> template;

    private final CountDownLatch latch = new CountDownLatch(3);

    @Override
    public void run(String... args) throws Exception {
        this.template.send("myTopic", "foo1");
        this.template.send("myTopic", "foo2");
        this.template.send("myTopic", "foo3");
        latch.await(60, TimeUnit.SECONDS);
        logger.info("All received");
    }

    @KafkaListener(topics = "myTopic")
    public void listen(ConsumerRecord<?, ?> cr) throws Exception {
        logger.info(cr.toString());
        latch.countDown();
    }
}

These are the logs I get when I run the app:

2020-04-07 01:06:45.074 DEBUG 7560 --- [           main] KafkaListenerAnnotationBeanPostProcessor : 1 @KafkaListener methods processed on bean 'runner': {public void com.vdt.learningkafka.Runner.listen(org.apache.kafka.clients.consumer.ConsumerRecord) throws java.lang.Exception=[@org.springframework.kafka.annotation.KafkaListener(autoStartup=, beanRef=__listener, clientIdPrefix=, concurrency=, containerFactory=, containerGroup=, errorHandler=, groupId=, id=, idIsGroup=true, properties=[], splitIterables=true, topicPartitions=[], topicPattern=, topics=[myTopic])]}
2020-04-07 01:06:45.284  INFO 7560 --- [           main] o.a.k.clients.admin.AdminClientConfig    : AdminClientConfig values: 
bootstrap.servers = [localhost:29092, localhost:29093]
client.dns.lookup = default
client.id = 
connections.max.idle.ms = 300000
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
receive.buffer.bytes = 65536
reconnect.backoff.max.ms = 1000
reconnect.backoff.ms = 50
request.timeout.ms = 120000
retries = 5
retry.backoff.ms = 100
sasl.client.callback.handler.class = null
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.login.callback.handler.class = null
sasl.login.class = null
sasl.login.refresh.buffer.seconds = 300
sasl.login.refresh.min.period.seconds = 60
sasl.login.refresh.window.factor = 0.8
sasl.login.refresh.window.jitter = 0.05
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = https
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
2020-04-07 01:06:45.305 DEBUG 7560 --- [           main] o.a.k.c.a.i.AdminMetadataManager         : [AdminClient clientId=adminclient-1] Setting bootstrap cluster metadata Cluster(id = null, nodes = [localhost:29093 (id: -2 rack: null), localhost:29092 (id: -1 rack: null)], partitions = [], controller = null).
2020-04-07 01:06:45.411 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name connections-closed:
2020-04-07 01:06:45.413 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name connections-created:
2020-04-07 01:06:45.414 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name successful-authentication:
2020-04-07 01:06:45.414 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name successful-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name successful-authentication-no-reauth:
2020-04-07 01:06:45.415 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name failed-authentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name failed-reauthentication:
2020-04-07 01:06:45.415 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name reauthentication-latency:
2020-04-07 01:06:45.416 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name bytes-sent-received:
2020-04-07 01:06:45.416 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name bytes-sent:
2020-04-07 01:06:45.417 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name bytes-received:
2020-04-07 01:06:45.418 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name select-time:
2020-04-07 01:06:45.419 DEBUG 7560 --- [           main] org.apache.kafka.common.metrics.Metrics  : Added sensor with name io-time:
2020-04-07 01:06:45.428  INFO 7560 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka version: 2.3.1
2020-04-07 01:06:45.428  INFO 7560 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka commitId: 18a913733fb71c01
2020-04-07 01:06:45.428  INFO 7560 --- [           main] o.a.kafka.common.utils.AppInfoParser     : Kafka startTimeMs: 1586210805426
2020-04-07 01:06:45.430 DEBUG 7560 --- [           main] o.a.k.clients.admin.KafkaAdminClient     : [AdminClient clientId=adminclient-1] Kafka admin client initialized
2020-04-07 01:06:45.433 DEBUG 7560 --- [           main] o.a.k.clients.admin.KafkaAdminClient     : [AdminClient clientId=adminclient-1] Queueing Call(callName=describeTopics, deadlineMs=1586210925432) with a timeout 120000 ms from now.
2020-04-07 01:06:45.434 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: -2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.442 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--2.bytes-sent
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--2.bytes-received
2020-04-07 01:06:45.443 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--2.latency
2020-04-07 01:06:45.445 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -2
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Completed connection to node -2. Fetching API versions.
2020-04-07 01:06:45.569 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -2.
2020-04-07 01:06:45.576 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.577 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Node -2 disconnected.
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: -1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.578 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--1.bytes-sent
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--1.bytes-received
2020-04-07 01:06:45.579 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node--1.latency
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node -1
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Completed connection to node -1. Fetching API versions.
2020-04-07 01:06:45.580 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node -1.
2020-04-07 01:06:45.586 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Recorded API versions for node -1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.594 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager         : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.595 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.596 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-2.bytes-sent
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-2.bytes-received
2020-04-07 01:06:45.597 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-2.latency
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Node 2 disconnected.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager         : [AdminClient clientId=adminclient-1] Requesting metadata update.
2020-04-07 01:06:45.598 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29092 (id: 1 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.599 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-1.bytes-sent
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-1.bytes-received
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.common.metrics.Metrics  : Added sensor with name node-1.latency
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 1
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Completed connection to node 1. Fetching API versions.
2020-04-07 01:06:45.600 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 1.
2020-04-07 01:06:45.602 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Recorded API versions for node 1: (Produce(0): 0 to 8 [usable: 7], Fetch(1): 0 to 11 [usable: 11], ListOffsets(2): 0 to 5 [usable: 5], Metadata(3): 0 to 9 [usable: 8], LeaderAndIsr(4): 0 to 4 [usable: 2], StopReplica(5): 0 to 2 [usable: 1], UpdateMetadata(6): 0 to 6 [usable: 5], ControlledShutdown(7): 0 to 3 [usable: 2], OffsetCommit(8): 0 to 8 [usable: 7], OffsetFetch(9): 0 to 6 [usable: 5], FindCoordinator(10): 0 to 3 [usable: 2], JoinGroup(11): 0 to 6 [usable: 5], Heartbeat(12): 0 to 4 [usable: 3], LeaveGroup(13): 0 to 4 [usable: 2], SyncGroup(14): 0 to 4 [usable: 3], DescribeGroups(15): 0 to 5 [usable: 3], ListGroups(16): 0 to 3 [usable: 2], SaslHandshake(17): 0 to 1 [usable: 1], ApiVersions(18): 0 to 3 [usable: 2], CreateTopics(19): 0 to 5 [usable: 3], DeleteTopics(20): 0 to 4 [usable: 3], DeleteRecords(21): 0 to 1 [usable: 1], InitProducerId(22): 0 to 2 [usable: 1], OffsetForLeaderEpoch(23): 0 to 3 [usable: 3], AddPartitionsToTxn(24): 0 to 1 [usable: 1], AddOffsetsToTxn(25): 0 to 1 [usable: 1], EndTxn(26): 0 to 1 [usable: 1], WriteTxnMarkers(27): 0 [usable: 0], TxnOffsetCommit(28): 0 to 2 [usable: 2], DescribeAcls(29): 0 to 1 [usable: 1], CreateAcls(30): 0 to 1 [usable: 1], DeleteAcls(31): 0 to 1 [usable: 1], DescribeConfigs(32): 0 to 2 [usable: 2], AlterConfigs(33): 0 to 1 [usable: 1], AlterReplicaLogDirs(34): 0 to 1 [usable: 1], DescribeLogDirs(35): 0 to 1 [usable: 1], SaslAuthenticate(36): 0 to 1 [usable: 1], CreatePartitions(37): 0 to 1 [usable: 1], CreateDelegationToken(38): 0 to 2 [usable: 1], RenewDelegationToken(39): 0 to 1 [usable: 1], ExpireDelegationToken(40): 0 to 1 [usable: 1], DescribeDelegationToken(41): 0 to 1 [usable: 1], DeleteGroups(42): 0 to 2 [usable: 1], ElectPreferredLeaders(43): 0 to 2 [usable: 0], IncrementalAlterConfigs(44): 0 to 1 [usable: 0], UNKNOWN(45): 0, UNKNOWN(46): 0, UNKNOWN(47): 0)
2020-04-07 01:06:45.605 DEBUG 7560 --- [| adminclient-1] o.a.k.c.a.i.AdminMetadataManager         : [AdminClient clientId=adminclient-1] Updating cluster metadata to Cluster(id = gSypiCeoSlyuSR4ks5qwwA, nodes = [localhost:29092 (id: 1 rack: null), localhost:29093 (id: 2 rack: null)], partitions = [], controller = localhost:29093 (id: 2 rack: null))
2020-04-07 01:06:45.639 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating connection to node localhost:29093 (id: 2 rack: null) using address localhost/127.0.0.1
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Created socket with SO_RCVBUF = 65536, SO_SNDBUF = 131072, SO_TIMEOUT = 0 to node 2
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Completed connection to node 2. Fetching API versions.
2020-04-07 01:06:45.640 DEBUG 7560 --- [| adminclient-1] org.apache.kafka.clients.NetworkClient   : [AdminClient clientId=adminclient-1] Initiating API versions fetch from node 2.
2020-04-07 01:06:45.641 DEBUG 7560 --- [| adminclient-1] o.apache.kafka.common.network.Selector   : [AdminClient clientId=adminclient-1] Connection with localhost/127.0.0.1 disconnected
java.io.EOFException: null
at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:96) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:424) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:385) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.attemptRead(Selector.java:651) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.pollSelectionKeys(Selector.java:572) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.common.network.Selector.poll(Selector.java:483) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:539) ~[kafka-clients-2.3.1.jar:na]
at org.apache.kafka.clients.admin.KafkaAdminClient$AdminClientRunnable.run(KafkaAdminClient.java:1152) ~[kafka-clients-2.3.1.jar:na]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]

The configuration from Docker seems to be fine, since Kafdrop successfully connects to the brokers. Also, from what I can tell from the logs, the Spring application manages to connect to the brokers, but immediately afterwards the connection is closed.

Answer 1

Score: 1

To connect a Docker-based application to Kafka that is also running inside a Docker container, you have to refer to the name of the Kafka container (which acts as an alias for its address):

spring.kafka.bootstrap-servers=kafka-broker-1:29092,kafka-broker-2:29093

If you try to connect an application running inside a Docker container to localhost:29092, it tries to connect to the localhost of that very container, not to the outer network.
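This localhost behavior can be illustrated with a small, self-contained Java check (a sketch; the class name is made up for illustration). `localhost` always resolves to the loopback interface of whatever machine or container the process itself runs in, never to another container on the Docker network:

```java
import java.net.InetAddress;

public class LocalhostCheck {
    public static void main(String[] args) throws Exception {
        // "localhost" resolves to the loopback interface of the host (or
        // container) the JVM runs in -- so inside a container it can never
        // reach a broker that is only reachable on another host's loopback.
        InetAddress addr = InetAddress.getByName("localhost");
        System.out.println(addr.getHostAddress());    // typically 127.0.0.1
        System.out.println(addr.isLoopbackAddress()); // true
    }
}
```

Run inside the Spring app's container, this would print the container's own loopback address, which is why the broker's advertised `localhost:29092` never leads back to the broker.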

> The configuration from docker seems to be fine since Kafdrop successfully connects brokers

Yes, check the kafdrop-web service inside docker-compose.yml and take a look at how it connects to the Kafka brokers. The container names are used there:

KAFKA_BROKERCONNECT=kafka-broker-1:9092,kafka-broker-2:9093

huangapple
  • Published on 2020-04-07 06:14:06
  • Please retain this link when reprinting: https://go.coder-hub.com/61069792.html