40
votes

I am doing the Kafka Quickstart for Kafka 0.9.0.0.

I have zookeeper listening at localhost:2181 because I ran

bin/zookeeper-server-start.sh config/zookeeper.properties

I have a single broker listening at localhost:9092 because I ran

bin/kafka-server-start.sh config/server.properties

I have a producer posting to topic "test" because I ran

bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
yello
is this thing on?
let's try another
gimme more

The old-API consumer works when I run

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

However, the new-API consumer doesn't output anything when I run

bin/kafka-console-consumer.sh --new-consumer --topic test --from-beginning \
    --bootstrap-server localhost:9092

Is it possible to subscribe to a topic from the console consumer using the new API? How can I fix this?

16
What Scala version are you using? Did you compile Kafka yourself? I had a couple of minor issues with kafka_2.10-0.9.0.0.tgz, but with kafka_2.11-0.9.0.0.tgz it works like a charm, your example included. – vlain
OK, thanks, this was with 2.10. If I try again it will be with 2.11. – EthanP
Did you create the 'test' topic? – Hossein Vatani

16 Answers

47
votes

On my Mac I was facing the same issue of console-consumer not consuming any messages when I used the command

kafka-console-consumer --bootstrap-server localhost:9095 --from-beginning --topic my-replicated-topic

But when I tried with

kafka-console-consumer --bootstrap-server localhost:9095 --from-beginning --topic my-replicated-topic --partition 0

It happily lists the messages sent. Is this a bug in Kafka 1.10.11?

11
votes

I just ran into this issue and the solution was to delete /brokers in zookeeper and restart the kafka nodes.

bin/zookeeper-shell <zk-host>:2181

and then

rmr /brokers

Not sure why this solves it.
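For reference, what gets removed is everything Kafka registers under /brokers. A quick way to peek first (a sketch, run inside the same zookeeper-shell session):

ls /brokers

This typically shows the ids, topics and seqid child nodes; note that /brokers/topics holds the topic partition assignments, so removing /brokers wipes that metadata as well.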

When I enabled debug logging, I saw this error message over and over again in the consumer:

2017-07-07 01:20:12 DEBUG AbstractCoordinator:548 - Sending GroupCoordinator request for group test to broker xx.xx.xx.xx:9092 (id: 1007 rack: null)
2017-07-07 01:20:12 DEBUG AbstractCoordinator:559 - Received GroupCoordinator response ClientResponse(receivedTimeMs=1499390412231, latencyMs=84, disconnected=false, requestHeader={api_key=10,api_version=0,correlation_id=13,client_id=consumer-1}, responseBody={error_code=15,coordinator={node_id=-1,host=,port=-1}}) for group test
2017-07-07 01:20:12 DEBUG AbstractCoordinator:581 - Group coordinator lookup for group test failed: The group coordinator is not available.
2017-07-07 01:20:12 DEBUG AbstractCoordinator:215 - Coordinator discovery failed for group test, refreshing metadata

8
votes

For me, the solution described in this thread worked: https://stackoverflow.com/a/51540528/7568227

Check if

offsets.topic.replication.factor

(or possibly other replication-related config parameters) is not higher than the number of brokers. That was the problem in my case.

There was no need to use --partition 0 anymore after this fix.

Otherwise I recommend following the debugging procedure described in the linked thread.
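For example, a quick way to check both the setting and the internal topic (a sketch, assuming the quickstart paths and the localhost addresses from the question; __consumer_offsets only exists once a consumer group has tried to use it):

grep offsets.topic.replication.factor config/server.properties
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets

If the replication factor is larger than the number of live brokers, the group coordinator cannot become available (error_code=15 in the debug log above) and the console consumer sits there silently.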

5
votes

Was getting the same issue on my Mac. I checked the logs and found the following error.

Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). 
This error can be ignored if the cluster is starting up and not all brokers are up yet.

This can be fixed by changing the replication factor to 1. Add the following line in server.properties and restart Kafka/Zookeeper.

offsets.topic.replication.factor=1
5
votes

In my case, this doesn't work

kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic

and this works

kafka-console-consumer --bootstrap-server localhost:9092 --from-beginning --topic my-replicated-topic --partition 0

because the topic __consumer_offsets was located on an inaccessible broker. Basically, I'd forgotten to replicate it. Relocating __consumer_offsets solved my issue.
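If it helps, a rough sketch of relocating it with the stock reassignment tool (the file name reassign.json, broker ids 0 and 1, and the localhost ZooKeeper address are assumptions; __consumer_offsets has 50 partitions by default, so a real plan needs an entry per partition, or use the tool's --generate mode):

cat > reassign.json <<'EOF'
{"version":1,"partitions":[{"topic":"__consumer_offsets","partition":0,"replicas":[0,1]}]}
EOF
bin/kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute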

2
votes

I got the same problem, and now I have figured it out.

When you use --zookeeper, it should be given the ZooKeeper address as its parameter.

When you use --bootstrap-server, it should be given a broker address as its parameter.
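Side by side, with the addresses from the question:

# old consumer: point at ZooKeeper
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic test --from-beginning

# new consumer: point at a broker (on 0.9 you may also need --new-consumer, as in the question)
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning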

2
votes

Your localhost is the problem here. If you replace the word localhost with the actual hostname, it should work.

like this:

producer

./bin/kafka-console-producer.sh --broker-list \
sandbox-hdp.hortonworks.com:9092 --topic test

consumer:

./bin/kafka-console-consumer.sh --topic test --from-beginning \
    --bootstrap-server sandbox-hdp.hortonworks.com:9092
2
votes

This problem also impacts ingesting data from Kafka using Flume and sinking it to HDFS.

To fix the above issue:

  1. Stop the Kafka brokers
  2. Connect to the ZooKeeper cluster and remove the /brokers znode
  3. Restart the Kafka brokers

There was no issue with the Kafka client version or the Scala version used in the cluster; ZooKeeper might have had wrong information about the broker hosts.
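Concretely, with the stock scripts (a sketch; substitute your own ZooKeeper host, run the stop/start steps on each broker, and type rmr /brokers inside the ZooKeeper shell):

bin/kafka-server-stop.sh
bin/zookeeper-shell.sh <zk-host>:2181
rmr /brokers
bin/kafka-server-start.sh config/server.properties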

To verify the action:

Create a topic in Kafka:

$ kafka-topics --create --zookeeper ip-20-0-21-161.ec2.internal:2181 --replication-factor 1 --partitions 1 --topic rkkrishnaa3210

Open a producer channel and feed some messages to it.

$ kafka-console-producer --broker-list slavenode03.cdh.com:9092 --topic rkkrishnaa3210

Open a consumer channel to consume the message from a specific topic.

$ kafka-console-consumer --bootstrap-server slavenode01.cdh.com:9092 --topic rkkrishnaa3210 --from-beginning

To test this from flume:

Flume agent config:

rk.sources  = source1
rk.channels = channel1
rk.sinks = sink1

rk.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
rk.sources.source1.zookeeperConnect = ip-20-0-21-161.ec2.internal:2181
rk.sources.source1.topic = rkkrishnaa3210
rk.sources.source1.groupId = flume1
rk.sources.source1.channels = channel1
rk.sources.source1.interceptors = i1
rk.sources.source1.interceptors.i1.type = timestamp
rk.sources.source1.kafka.consumer.timeout.ms = 100
rk.channels.channel1.type = memory
rk.channels.channel1.capacity = 10000
rk.channels.channel1.transactionCapacity = 1000
rk.sinks.sink1.type = hdfs
rk.sinks.sink1.hdfs.path = /user/ce_rk/kafka/%{topic}/%y-%m-%d
rk.sinks.sink1.hdfs.rollInterval = 5
rk.sinks.sink1.hdfs.rollSize = 0
rk.sinks.sink1.hdfs.rollCount = 0
rk.sinks.sink1.hdfs.fileType = DataStream
rk.sinks.sink1.channel = channel1

Run flume agent:

flume-ng agent --conf . -f flume.conf -Dflume.root.logger=DEBUG,console -n rk

Observe in the consumer logs that the messages from the topic are written to HDFS.

18/02/16 05:21:14 INFO internals.AbstractCoordinator: Successfully joined group flume1 with generation 1
18/02/16 05:21:14 INFO internals.ConsumerCoordinator: Setting newly assigned partitions [rkkrishnaa3210-0] for group flume1
18/02/16 05:21:14 INFO kafka.SourceRebalanceListener: topic rkkrishnaa3210 - partition 0 assigned.
18/02/16 05:21:14 INFO kafka.KafkaSource: Kafka source source1 started.
18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Monitored counter group for type: SOURCE, name: source1: Successfully registered new MBean.
18/02/16 05:21:14 INFO instrumentation.MonitoredCounterGroup: Component type: SOURCE, name: source1 started
18/02/16 05:21:41 INFO hdfs.HDFSDataStream: Serializer = TEXT, UseRawLocalFileSystem = false
18/02/16 05:21:42 INFO hdfs.BucketWriter: Creating /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
18/02/16 05:21:48 INFO hdfs.BucketWriter: Closing /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp
18/02/16 05:21:48 INFO hdfs.BucketWriter: Renaming /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920.tmp to /user/ce_rk/kafka/rkkrishnaa3210/18-02-16/FlumeData.1518758501920
18/02/16 05:21:48 INFO hdfs.HDFSEventSink: Writer callback called.
1
votes

Use this:

$ bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning

Note: Remove --new-consumer from your command

For reference see here: https://kafka.apache.org/quickstart

0
votes

Can you please try it like this:

bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic my-replicated-topic
0
votes

In my case it didn't work using either approach, so I increased the log level to DEBUG in config/log4j.properties and started the console consumer

./bin/kafka-console-consumer.sh --bootstrap-server 127.0.0.1:9092 --from-beginning --topic MY_TOPIC

Then got the log below

[2018-03-11 12:11:25,711] DEBUG [MetadataCache brokerId=10] Error while fetching metadata for MY_TOPIC-3: leader not available (kafka.server.MetadataCache)

The point here is that I have two Kafka nodes but one is down, and for some reason, by default, kafka-console-consumer will not consume if some partition is unavailable because its node is down (partition 3 in this case). This doesn't happen in my application.
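To confirm which partitions have no leader, the stock topics tool can list them (a sketch; it assumes ZooKeeper on localhost:2181, since kafka-topics still talked to ZooKeeper in this version):

./bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic MY_TOPIC --unavailable-partitions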

Possible solutions are

  • Start up the down brokers
  • Delete the topic and create it again, so that all partitions are placed on the online broker node
0
votes

Run the below command from bin:

./kafka-console-consumer.sh --topic test --from-beginning --bootstrap-server localhost:9092

"test" is the topic name

0
votes

I had this problem where the consumer finished executing immediately with kafka_2.12-2.3.0.tgz.

I tried debugging, but no logs were printed.

It runs fine with kafka_2.12-2.2.2.

Also, try running ZooKeeper and Kafka as described in the quickstart guide!

-1
votes

In my case, broker.id=1 in server.properties was the problem.

This should be broker.id=0 when you use only one Kafka server for development.

Don't forget to remove all logs and restart ZooKeeper and Kafka:

  • Remove /tmp/kafka-logs (as defined in the server.properties file)
  • Remove [your_kafka_home]/logs
  • Restart ZooKeeper and Kafka
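A quick sanity check of the relevant settings (a sketch, assuming the standard config path and the default log directory):

grep -E '^(broker\.id|log\.dirs)' config/server.properties

For a single-broker development setup you would expect broker.id=0 and log.dirs=/tmp/kafka-logs.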
-3
votes

In kafka_2.11-0.11.0.0 the --zookeeper option is deprecated in favor of --bootstrap-server, which takes a broker IP address and port. If you give correct broker parameters, you will be able to consume messages.

e.g. $ bin/kafka-console-consumer.sh --bootstrap-server :9093 --topic test --from-beginning

I'm using port 9093; yours may vary.

regards.

-4
votes

replication factor must be at least 3

./bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic test