9 votes

I have been trying to deploy Kafka with schema registry locally using Kubernetes. However, the logs of the schema registry pod show this error message:

ERROR Server died unexpectedly:  (io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain:51)
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

What could be the reason for this behavior? In order to run Kubernetes locally, I use Minikube version v0.32.0 with Kubernetes version v1.13.0.

My Kafka configuration:

apiVersion: v1
kind: Service
metadata:
  name: kafka-1
spec:
  ports:
    - name: client
      port: 9092
  selector:
    app: kafka
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-1
spec:
  selector:
    matchLabels:
      app: kafka
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: kafka
        server-id: "1"
    spec:
      volumes:
        - name: kafka-data
          emptyDir: {}
      containers:
        - name: server
          image: confluent/kafka:0.10.0.0-cp1
          env:
            - name: KAFKA_ZOOKEEPER_CONNECT
              value: zookeeper-1:2181
            - name: KAFKA_ADVERTISED_HOST_NAME
              value: kafka-1
            - name: KAFKA_BROKER_ID
              value: "1"
          ports:
            - containerPort: 9092
          volumeMounts:
            - mountPath: /var/lib/kafka
              name: kafka-data
---
apiVersion: v1
kind: Service
metadata:
  name: schema
spec:
  ports:
    - name: client
      port: 8081
  selector:
    app: kafka-schema-registry
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kafka-schema-registry
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kafka-schema-registry
  template:
    metadata:
      labels:
        app: kafka-schema-registry
    spec:
      containers:
        - name: kafka-schema-registry
          image: confluent/schema-registry:3.0.0
          env:
            - name: SR_KAFKASTORE_CONNECTION_URL
              value: zookeeper-1:2181
            - name: SR_KAFKASTORE_TOPIC
              value: "_schema_registry"
            - name: SR_LISTENERS
              value: "http://0.0.0.0:8081"
          ports:
            - containerPort: 8081

ZooKeeper configuration:

apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
    - name: client
      port: 2181
  selector:
    app: zookeeper
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper-1
spec:
  ports:
    - name: client
      port: 2181
    - name: followers
      port: 2888
    - name: election
      port: 3888
  selector:
    app: zookeeper
    server-id: "1"
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: zookeeper-1
spec:
  selector:
    matchLabels:
      app: zookeeper
      server-id: "1"
  replicas: 1
  template:
    metadata:
      labels:
        app: zookeeper
        server-id: "1"
    spec:
      volumes:
        - name: data
          emptyDir: {}
        - name: wal
          emptyDir:
            medium: Memory
      containers:
        - name: server
          image: elevy/zookeeper:v3.4.7
          env:
            - name: MYID
              value: "1"
            - name: SERVERS
              value: "zookeeper-1"
            - name: JVMFLAGS
              value: "-Xmx2G"
          ports:
            - containerPort: 2181
            - containerPort: 2888
            - containerPort: 3888
          volumeMounts:
            - mountPath: /zookeeper/data
              name: data
            - mountPath: /zookeeper/wal
              name: wal
By the way, the confluent/ Docker images are deprecated, and the confluentinc/ ones are preferred. As mentioned previously, are you having issues using Helm charts? docs.confluent.io/current/installation/installing_cp/… – OneCricketeer

I don't have issues with Helm charts. I need to deploy custom Kafka solutions without Helm; that is why I am trying to do so. – Cassie

I'm not seeing anything that looks very custom, though. Kafka is really only installed in one way, and maybe the config values are changed a bit, but any custom apps built around Kafka + Schema Registry can be defined in separate YAML files. – OneCricketeer

5 Answers

12 votes
org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata

can happen when the client tries to connect to a broker that expects SSL connections and the client config does not specify:

security.protocol=SSL 
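
For illustration, a minimal client.properties sketch, assuming a broker with an SSL-only listener; the truststore path and password are placeholders, not values from the question:

# Hypothetical client.properties for a broker that only accepts SSL connections.
# Truststore location and password below are placeholders.
bootstrap.servers=kafka-1:9092
security.protocol=SSL
ssl.truststore.location=/etc/kafka/secrets/client.truststore.jks
ssl.truststore.password=changeit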
3 votes

One time I fixed this issue by restarting my machine, but when it happened again and I didn't want to restart, I fixed it with this property in the server.properties file:

advertised.listeners=PLAINTEXT://localhost:9092
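
For context, a sketch of the two related server.properties entries, assuming a single plaintext broker; the key point is that advertised.listeners must be resolvable and reachable from the client's side:

# Interface and port the broker binds to.
listeners=PLAINTEXT://0.0.0.0:9092
# Address the broker hands back to clients in metadata responses;
# clients must be able to resolve and reach this host:port.
advertised.listeners=PLAINTEXT://localhost:9092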
1 vote

Kafka fetching topic metadata can fail for two reasons:

Reason 1: The bootstrap server is not accepting your connections. This can be caused by a proxy issue such as a VPN or by server-level security groups.

Reason 2: A mismatch in the security protocol, where the expected protocol is SASL_SSL and the actual one is SSL, or the reverse, or either side is PLAINTEXT.
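
To illustrate the second case, a client-side sketch for a broker whose listener expects SASL_SSL; the mechanism and credentials below are placeholder assumptions:

# Hypothetical client settings matching a SASL_SSL listener.
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
# Username and password are placeholders.
sasl.jaas.config=org.apache.kafka.common.security.plain.PlainLoginModule required \
  username="client" password="client-secret";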

0 votes

For others who might face this issue, it may happen because the topics have not been created on the Kafka broker. So make sure to create the appropriate topics on the server, as referenced in your codebase.
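
For example, a topic can be created with the kafka-topics tool shipped with Kafka; the topic name and sizing below are placeholders, and older brokers (such as the 0.10.x image in the question) take --zookeeper zookeeper-1:2181 instead of --bootstrap-server:

kafka-topics --create --topic my-topic --partitions 1 --replication-factor 1 --bootstrap-server kafka-1:9092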

0 votes

I faced the same issue even though all the SSL config was in place and the topics were created. After long research, I enabled the Spring debug logs. The internal error was org.springframework.jdbc.CannotGetJdbcConnectionException. Another thread mentioned that a Spring Boot and Kafka dependency mismatch can cause this timeout exception, so I upgraded Spring Boot from 2.1.3 to 2.2.4. Now there is no error and the Kafka connection is successful. Might be useful to someone.
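
For reference, that upgrade is just a parent-version bump; a sketch assuming a Maven build that inherits from the Spring Boot starter parent:

<!-- Hypothetical pom.xml fragment; the version was 2.1.3.RELEASE before the fix. -->
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.2.4.RELEASE</version>
</parent>

Spring Boot's dependency management then pulls in a spring-kafka and kafka-clients combination that is tested against that Boot release.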