We have a scenario in our Storm topology where the KafkaSpout is unable to consume any messages from its topics. The spout continuously logs the same WARN message:
Got fetch request with offset out of range
...
2016-10-26 11:11:31.070 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
2016-10-26 11:11:31.078 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
2016-10-26 11:11:31.084 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
2016-10-26 11:11:31.098 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
2016-10-26 11:11:31.104 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
2016-10-26 11:11:31.111 o.a.s.k.KafkaUtils [WARN] Partition{host=somehost.org:9092, topic=my-topic, partition=0} Got fetch request with offset out of range: [3]
...
The spout is configured to read the last committed offset from ZooKeeper, and in this scenario that offset is greater than the newest message offset in Kafka. We are also investigating why the topic's offsets are resetting in the first place.
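For reference, this is roughly how the spout is wired up, with the storm-kafka settings that should govern the out-of-range case shown explicitly at their defaults. The broker/ZooKeeper host, zkRoot, and spout id are placeholders, not our real values:

```java
import org.apache.storm.kafka.BrokerHosts;
import org.apache.storm.kafka.KafkaSpout;
import org.apache.storm.kafka.SpoutConfig;
import org.apache.storm.kafka.StringScheme;
import org.apache.storm.kafka.ZkHosts;
import org.apache.storm.spout.SchemeAsMultiScheme;

public class SpoutBuilder {
    public static KafkaSpout buildSpout() {
        // Host, zkRoot and spout id are placeholders for our real values.
        BrokerHosts hosts = new ZkHosts("somehost.org:2181");
        SpoutConfig cfg = new SpoutConfig(hosts, "my-topic", "/kafka-offsets", "my-spout-id");
        cfg.scheme = new SchemeAsMultiScheme(new StringScheme());

        // Defaults, shown explicitly: on start, read the committed offset from
        // ZooKeeper (ignoreZkOffsets = false), and fall back to startOffsetTime
        // when the broker answers a fetch with OffsetOutOfRange, which in this
        // scenario does not seem to happen.
        cfg.ignoreZkOffsets = false;
        cfg.useStartOffsetTimeIfOffsetOutOfRange = true;
        cfg.startOffsetTime = kafka.api.OffsetRequest.EarliestTime();

        return new KafkaSpout(cfg);
    }
}
```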
Currently we resolve the issue manually: when we see the out-of-range warning in the Storm logs, we delete the offset entry from ZooKeeper and then re-deploy the topology.
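For completeness, this is roughly what the manual cleanup does, sketched with the plain ZooKeeper Java client. The connect string and znode path are assumptions derived from the placeholder zkRoot and spout id above (storm-kafka keeps per-partition state under <zkRoot>/<spout id>/partition_<n>), so adjust both to your SpoutConfig:

```java
import org.apache.zookeeper.ZooKeeper;

public class ZkOffsetCleaner {
    public static void main(String[] args) throws Exception {
        // Connect to the ensemble the spout commits its offsets to (host is a placeholder).
        ZooKeeper zk = new ZooKeeper("somehost.org:2181", 15000, event -> { });
        try {
            // Path assumes the placeholder zkRoot and spout id from the config above.
            String path = "/kafka-offsets/my-spout-id/partition_0";
            if (zk.exists(path, false) != null) {
                zk.delete(path, -1); // version -1 matches any node version
            }
        } finally {
            zk.close();
        }
    }
}
```

Once the node is gone, the re-deployed spout finds no committed offset and starts from startOffsetTime instead of the stale one, which unblocks consumption until the offsets reset again.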