3 votes

I have a Kafka Connect source and sink connector for putting data into Kafka and taking it back out.

I am running Kafka and Kafka Connect using docker-compose, which runs Connect in distributed mode. I can see that Connect finds my plugin when it starts up, but it doesn't actually do anything unless I POST to the /connectors REST API with the connector configuration as JSON.
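Roughly like this (a sketch; the connector name, class, and topic are placeholders for my own plugin):

    curl -s -X POST -H "Content-Type: application/json" \
      http://localhost:8083/connectors \
      -d '{
        "name": "my-source-connector",
        "config": {
          "connector.class": "com.example.MySourceConnector",
          "tasks.max": "1",
          "topic": "my-topic"
        }
      }'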

I have a properties file with the same configuration in it, and I've tried putting it under /etc, where I find similar properties files for the other plugins that are installed. It looks like the sketch below.
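A sketch of the properties file, using the same placeholder names as above (this is the standard key=value format Connect uses for connector configs in standalone mode):

    name=my-source-connector
    connector.class=com.example.MySourceConnector
    tasks.max=1
    topic=my-topic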

Am I missing a step when installing my plugin, or is it required to register the connector via the REST API before it will be assigned to workers?


2 Answers

1 vote

Yes, in distributed mode you have to create connectors via the REST API; connector properties files are only read by the worker in standalone mode.

It's possible to script the creation of connectors, though, using a Docker Compose command override like this:

  command:
    - bash
    - -c
    - |
      # Start the Connect worker in the background
      /etc/confluent/docker/run &
      echo "Waiting for Kafka Connect to start listening on kafka-connect ⏳"
      # curl reports HTTP code 000 until the REST listener is up
      while [ $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) -eq 000 ] ; do
        echo -e $$(date) " Kafka Connect listener HTTP state: " $$(curl -s -o /dev/null -w %{http_code} http://kafka-connect:8083/connectors) " (waiting for 200)"
        sleep 5
      done
      nc -vz kafka-connect 8083
      echo -e "\n--\n+> Creating Kafka Connect Elasticsearch sink"
      /scripts/create-es-sink.sh
      sleep infinity

where /scripts/create-es-sink.sh is a file mounted locally into the container that holds the curl call to the REST API.
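Such a script is just the REST call itself. A sketch, assuming the Confluent Elasticsearch sink (the topic and Elasticsearch URL are illustrative):

    #!/bin/bash
    # Register (or update) the sink via the Connect REST API;
    # PUT to /connectors/{name}/config is idempotent, so re-runs are safe
    curl -s -X PUT -H "Content-Type: application/json" \
      http://kafka-connect:8083/connectors/sink-elastic/config \
      -d '{
        "connector.class": "io.confluent.connect.elasticsearch.ElasticsearchSinkConnector",
        "topics": "my-topic",
        "connection.url": "http://elasticsearch:9200",
        "key.ignore": "true"
      }'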

(source)

0 votes

You can install a Kafka connector plugin before you start the distributed Connect worker using "confluent-hub install", as shown here: Install Kafka connector manually. However, I'm not sure what the equivalent is if you aren't using confluent-hub.
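For example, to install the Elasticsearch sink plugin (the component name is real; the version tag here is illustrative):

    confluent-hub install --no-prompt confluentinc/kafka-connect-elasticsearch:latest

Note that this only installs the plugin JARs onto the worker. As the other answer says, you still have to create the connector instance itself via the REST API once the worker is running.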