2 votes

I am using Spring for Apache Kafka and have created a service that uses a Kafka Producer (org.apache.kafka.clients.producer) via Spring's KafkaTemplate to send messages to a topic. On the target Kafka cluster I have disabled auto topic creation. Using a combination of the producer configurations listed at https://kafka.apache.org/documentation/#producerconfigs, I am successfully controlling how many times a request is retried, the time between retries, and so on.

If I provide a topic that does not exist, the request times out when I expect it to (upon reaching the value of max.block.ms). However, after the timeout I continue to get log entries (such as the one below) at the interval set by retry.backoff.ms until 300000 ms (5 minutes) have elapsed.

I've been unable to determine which configuration property on the producer or the brokers can be changed so that the producer stops spending 5 minutes checking whether the topic has been created.

Can someone point me to the correct setting that would let me shorten this window, or stop the checks entirely once the request has timed out?

Log Entry Example:

WARN  [kafka-producer-network-thread | producer-1] org.apache.kafka.clients.NetworkClient$DefaultMetadataUpdater: [Producer clientId=producer-1] Error while fetching metadata with correlation id 9 : {<specified_topic>=UNKNOWN_TOPIC_OR_PARTITION}

Producer Configs Used:

  • delivery.timeout.ms = 5000
  • linger.ms = 1000
  • max.block.ms = 8000
  • request.timeout.ms = 4000
  • max.retry.count = 0
  • retry.backoff.ms = 2000
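
For reference, here is a minimal sketch of how these settings might be wired into the KafkaTemplate through a DefaultKafkaProducerFactory. The bootstrap address and String serializers are illustrative assumptions, not part of my actual setup:

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;
    import org.springframework.kafka.core.DefaultKafkaProducerFactory;
    import org.springframework.kafka.core.KafkaTemplate;

    public class ProducerSetup {

        public static KafkaTemplate<String, String> kafkaTemplate() {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // illustrative
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            // The settings listed above:
            props.put(ProducerConfig.DELIVERY_TIMEOUT_MS_CONFIG, 5000);
            props.put(ProducerConfig.LINGER_MS_CONFIG, 1000);
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 8000);
            props.put(ProducerConfig.REQUEST_TIMEOUT_MS_CONFIG, 4000);
            // Note: "max.retry.count" is not a recognized producer property;
            // the built-in retry setting is "retries".
            props.put(ProducerConfig.RETRIES_CONFIG, 0);
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 2000);
            return new KafkaTemplate<>(new DefaultKafkaProducerFactory<>(props));
        }
    }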
UNKNOWN_TOPIC_OR_PARTITION... Please describe this topic to prove it exists. If it doesn't exist, why are you trying to produce to it? Why should the producer stop trying to send to it? – OneCricketeer

Can you please show the configurations you have provided to the producer and broker? – Fatema Khuzaima Sagar

@FatemaSagar Updated the post. – HendPro12

@cricket_007 I am developing an enterprise service which will receive one or more messages via HTTP request from various clients and send them to the Kafka cluster and topic provided in the requests. Clients will not have the ability to create new topics, and they may accidentally pass an invalid/non-existent topic in their requests to this service. – HendPro12

What client version are you using? – mazaneicha

1 Answer

3 votes

The Kafka producer retrieves and caches topic/partition metadata before the first send. It then periodically tries to refresh this metadata: every metadata.max.age.ms (default: 5 minutes) for "good" topics, and every retry.backoff.ms for "invalid" ones. These metadata refresh attempts are what you're observing in the log.
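
To see both sides of this, here is a sketch (the broker address and topic name are made up, and exact exception propagation varies a bit between client versions): the calling thread is released once max.block.ms is exhausted, while the producer's network thread keeps refreshing metadata in the background:

    import java.util.HashMap;
    import java.util.Map;
    import java.util.concurrent.ExecutionException;

    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class MissingTopicDemo {

        public static void main(String[] args) throws Exception {
            Map<String, Object> props = new HashMap<>();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // made up
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
            props.put(ProducerConfig.MAX_BLOCK_MS_CONFIG, 8000);
            props.put(ProducerConfig.RETRY_BACKOFF_MS_CONFIG, 2000);

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            try {
                // send() blocks this thread for up to max.block.ms waiting for
                // metadata on the unknown topic; the send then fails.
                producer.send(new ProducerRecord<>("no-such-topic", "payload")).get();
            } catch (ExecutionException e) {
                // Cause is org.apache.kafka.common.errors.TimeoutException.
                System.out.println("send failed: " + e.getCause());
            }
            // While the producer stays open, its network thread keeps re-fetching
            // metadata for "no-such-topic" every retry.backoff.ms (logging the
            // UNKNOWN_TOPIC_OR_PARTITION warning each time) until the 5-minute
            // topic expiry described below evicts it from the cache.
            Thread.sleep(6 * 60 * 1000); // the WARN entries stop after ~5 minutes
            producer.close();
        }
    }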

To prevent the cache from growing uncontrollably, unused topics are dropped from it after a certain period of time, according to these source comments. Currently, this expiry period is hardcoded in ProducerMetadata.java at 5 minutes:

  public class ProducerMetadata extends Metadata {
      private static final long TOPIC_EXPIRY_NEEDS_UPDATE = -1L;
      static final long TOPIC_EXPIRY_MS = 5 * 60 * 1000;
        ...

You can actually observe all this activity by setting the producer log level to DEBUG.
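
For example, assuming a Spring Boot application (the property below is Boot's standard logging configuration), adding this line to application.properties turns that on:

    logging.level.org.apache.kafka.clients=DEBUG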