Kafka cluster configuration
I'm new to Kafka and have a question about Kafka configuration.
I want to use separate servers as below:
server1: Kafka producer
server2: Kafka broker, Kafka consumer, ZooKeeper
But I can't send messages to the broker, and I get the error messages below.
On the console producer (server1), stdout shows this error message:
`
[2016-05-24 16:41:11,823] ERROR Error when sending message to topic twitter with key: null, value: 3 bytes with error: Failed to update metadata after 60000 ms. (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
`
On the Kafka broker (server2), stdout shows this debug message:
`
[2016-05-25 10:20:01,588] DEBUG Connection /192.168.50.142 disconnected (org.apache.kafka.common.network.Selector)
java.io.EOFException
    at org.apache.kafka.common.network.NetworkReceive.readFromReadableChannel(NetworkReceive.java:83)
    at org.apache.kafka.common.network.NetworkReceive.readFrom(NetworkReceive.java:71)
    at org.apache.kafka.common.network.KafkaChannel.receive(KafkaChannel.java:160)
    at org.apache.kafka.common.network.KafkaChannel.read(KafkaChannel.java:141)
    at org.apache.kafka.common.network.Selector.poll(Selector.java:286)
    at kafka.network.Processor.run(SocketServer.scala:413)
    at java.lang.Thread.run(Thread.java:745)
`
I am running the commands below.
On server1, in the Kafka directory:
`
./bin/zookeeper-server-start.sh config/zookeeper.properties
./bin/kafka-server-start.sh config/server.properties
./bin/kafka-console-consumer.sh --zookeeper 192.168.50.142:2181 --from-beginning --topic twitter
./bin/kafka-topics.sh --create --zookeeper 192.168.50.142:2181 --replication-factor 1 --partitions 1 --topic twitter
`
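As a sanity check before producing, the topic metadata can be inspected with the same tool's describe option (a sketch, reusing the ZooKeeper address from the commands above); it shows the partition count, replicas, and which broker is currently the leader:
`
# Describe the topic as registered in ZooKeeper: partitions, replication factor, leader.
./bin/kafka-topics.sh --describe --zookeeper 192.168.50.142:2181 --topic twitter
`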
And on server2, in the Kafka directory:
`
./bin/kafka-console-producer.sh --broker-list 192.168.50.142:9092 --topic twitter
`
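Since the producer runs on a different host than the broker, it is also worth confirming that port 9092 on the broker machine is reachable from the producer machine at all. A minimal sketch, assuming a netcat build that supports the -z (scan only) flag; any TCP connectivity check would do:
`
# From the producer host: does a TCP connection to the broker port succeed?
nc -vz 192.168.50.142 9092
`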
The configurations are as follows.
server1 (IP: 192.168.50.155):
kafka/config/producer.properties
`
metadata.broker.list=192.168.50.142:9092
producer.type=sync
compression.codec=none
serializer.class=kafka.serializer.DefaultEncoder
`
server2 (IP: 192.168.50.142):
kafka/config/zookeeper.properties
`
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=0
`
kafka/config/server.properties
`
broker.id=0
listeners=PLAINTEXT://0.0.0.0:9092
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/tmp/kafka-logs
num.partitions=1
num.recovery.threads.per.data.dir=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=false
zookeeper.connect=localhost:2181
zookeeper.connection.timeout.ms=6000
broker.id=0
port=9092
log.dir=/tmp/kafka-logs-1
delete.topic.enable=true
`
kafka/config/consumer.properties
`
zookeeper.connect=127.0.0.1:2181
zookeeper.connection.timeout.ms=6000
group.id=test-consumer-group
`
Versions: kafka_2.11-0.9.0.0, Java 1.8.0_60, Node v4.4.4
Which configuration should I change? Please help.
It seems the producer configuration is not correct. Try the following in kafka/config/producer.properties:
`
bootstrap.servers=192.168.50.142:9092
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=org.apache.kafka.common.serialization.StringSerializer
`
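For the console producer to pick up that file, it has to be passed in explicitly; if I remember the 0.9 tooling correctly, kafka-console-producer.sh accepts a --producer.config option, so the invocation from the producer machine might look like this (a sketch based on the command and paths already used above):
`
# Point the console producer at the corrected properties file.
./bin/kafka-console-producer.sh --broker-list 192.168.50.142:9092 --topic twitter --producer.config config/producer.properties
`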