3
votes

Error after updates

[2019-07-29 12:52:23,301] INFO Initializing writer using SQL dialect: PostgreSqlDatabaseDialect (io.confluent.connect.jdbc.sink.JdbcSinkTask:57)
[2019-07-29 12:52:23,303] INFO WorkerSinkTask{id=sink-postgres-0} Sink task finished initialization and start (org.apache.kafka.connect.runtime.WorkerSinkTask:301)
[2019-07-29 12:52:23,367] WARN [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Error while fetching metadata with correlation id 2 : {kafkadad=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient:1023)
[2019-07-29 12:52:23,368] INFO Cluster ID: _gRuX5-0SUu72wzy6PV0Ag (org.apache.kafka.clients.Metadata:365)
[2019-07-29 12:52:23,369] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Discovered group coordinator INTRIVMPIOT01.xpetize.local:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:675)
[2019-07-29 12:52:23,372] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Revoking previously assigned partitions [] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:459)
[2019-07-29 12:52:23,373] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-07-29 12:52:23,383] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] (Re-)joining group (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:491)
[2019-07-29 12:52:23,482] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Successfully joined group with generation 1 (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:455)
[2019-07-29 12:52:23,486] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Setting newly assigned partitions: kafkadad-0 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator:290)
[2019-07-29 12:52:23,501] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Resetting offset for partition kafkadad-0 to offset 0. (org.apache.kafka.clients.consumer.internals.Fetcher:584)
[2019-07-29 12:52:35,338] ERROR WorkerSinkTask{id=sink-postgres-0} Task threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:177)
org.apache.kafka.connect.errors.ConnectException: Tolerance exceeded in error handler
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:178)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execute(RetryWithToleranceOperator.java:104)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertAndTransformRecord(WorkerSinkTask.java:487)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:464)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:320)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:224)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:192)
        at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:175)
        at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:219)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
        at java.util.concurrent.FutureTask.run(FutureTask.java:266)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.NullPointerException
        at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:701)
        at org.apache.kafka.connect.json.JsonConverter.access$000(JsonConverter.java:61)
        at org.apache.kafka.connect.json.JsonConverter$12.convert(JsonConverter.java:181)
        at org.apache.kafka.connect.json.JsonConverter.convertToConnect(JsonConverter.java:745)
        at org.apache.kafka.connect.json.JsonConverter.toConnectData(JsonConverter.java:363)
        at org.apache.kafka.connect.runtime.WorkerSinkTask.lambda$convertAndTransformRecord$1(WorkerSinkTask.java:487)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndRetry(RetryWithToleranceOperator.java:128)
        at org.apache.kafka.connect.runtime.errors.RetryWithToleranceOperator.execAndHandleError(RetryWithToleranceOperator.java:162)
        ... 13 more
[2019-07-29 12:52:35,347] ERROR WorkerSinkTask{id=sink-postgres-0} Task is being killed and will not recover until manually restarted (org.apache.kafka.connect.runtime.WorkerTask:178)
[2019-07-29 12:52:35,347] INFO Stopping task (io.confluent.connect.jdbc.sink.JdbcSinkTask:105)
[2019-07-29 12:52:35,349] INFO [Consumer clientId=consumer-1, groupId=connect-sink-postgres] Member consumer-1-bdbc7035-7625-4701-9ca7-c1ffa6863456 sending LeaveGroup request to coordinator INTRIVMPIOT01.xpetize.local:9092 (id: 2147483647 rack: null) (org.apache.kafka.clients.consumer.internals.AbstractCoordinator:822)

Producer console: (screenshot not included)

connect-standalone.properties file

bootstrap.servers=localhost:9092 
key.converter=org.apache.kafka.connect.json.JsonConverter 
value.converter=org.apache.kafka.connect.json.JsonConverter 
key.converter.schemas.enable=false 
value.converter.schemas.enable=true

offset.storage.file.filename=/tmp/connect.offsets 
offset.flush.interval.ms=10000
plugin.path=/home/kafka/confluent-5.2.1/share/java

connect-post.properties file

name=sink-postgres
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=2
topics=kafkada
connection.url=jdbc:postgresql://localhost:5432/kafkadb?
user=postgres&password=postgres
insert.mode=upsert
table.name.format=kafkatable
pk.mode=none
pk.fields=none
auto.create=true 
auto.evolve=false
offset.storage.file.filename=/tmp/post-sink.offsets

The above error occurs when I run ./bin/connect-standalone.sh config/connect-standalone.properties config.postgresql.properties with Apache Kafka.

I then tried, and successfully reproduced, the flow described in this link:

https://hellokoding.com/kafka-connect-sinks-data-to-postgres-example-with-avro-schema-registry-and-python

There, however, the data is generated from Python code using Avro. In my case, I already have data coming from sensors (in JSON format) in a Kafka topic, which I want to send to PostgreSQL instead of generating data through code.

So, how can I achieve this flow of sending data from a Kafka topic to PostgreSQL?

I have shared my properties files; please let me know if corrections are required. I am sending simple JSON data like {"cust_id": 1313131, "month": 12, "expenses": 1313.13}, and I also tried sending the type of data shown below, but the error still exists.

Sample JSON data:

    {
        "schema": {
            "type": "struct",
            "fields": [
                {
                    "type": "int32",
                    "optional": false,
                    "field": "customer_id"
                },
                {
                    "type": "int32",
                    "optional": true,
                    "field": "month"
                },
                {
                    "type": "string",
                    "optional": true,
                    "field": "amount_paid"
                }
            ],
            "optional": false,
            "name": "msgschema"
        },
        "payload": {
            "cust_id": 13,
            "month": 12,
            "expenses": 1313.13
        }
    }

and I have a table called kafkatable with columns (customer_id, month, amount_paid), created using:

"CREATE TABLE kafkatable( customer_id int8, month int4, amount_paid decimal(9,2) );"

Can you show an example of your messages (with the console consumer)? And the Python code as well? The error seems to suggest you don't have complete JSON payloads. - OneCricketeer
Hi @cricket_007, as of now I am not using any Python code; have a look at the properties files I have added. Through the console producer I am sending the sample JSON data I have shared in the question. - Nishad Nazar
Can you please show the console producer command? If you're sending that file directly using cat and a pipe, then it cannot be multiple lines... And if you already have existing data on the topic like that, then you'll still get errors, so you need a new topic - OneCricketeer
@cricket_007 Yes, I always create a new topic before running the producer command or the connect-standalone command. I have posted a new image of the producer with the message I'm sending. (Also, please tell me: if I'm sending a message like this with a schema, then I don't need to create the table in PostgreSQL manually, right? The schema from the producer auto-creates it?) Thank you so much for your replies. :) Please check the updated error images. - Nishad Nazar
Hi @cricket_007, please check out the updated images. - Nishad Nazar

2 Answers

2
votes

I solved this error by making the following changes (a combined sketch follows the list):

  1. Set insert.mode=insert.
  2. Comment out table.name.format=kafkatable, because the table will be created through auto.create.
  3. Remove the question mark from the end of the connection.url line.
  4. pk.fields should not be left as none here; make sure to give it a column name instead, to avoid complications.
  5. int32 is not supported by PostgreSQL, so when I changed it to int8 it worked fine.
  6. The fields in your schema and payload have different names; make sure to give them the same names.
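
Putting these changes together, a corrected sink configuration might look like the following. This is a sketch based on the question's files, not a verified configuration: the choice of customer_id for pk.fields is a hypothetical example, and connection.user/connection.password are used in place of the URL query parameters.

name=sink-postgres
connector.class=io.confluent.connect.jdbc.JdbcSinkConnector
tasks.max=2
topics=kafkada
# fix 3: no trailing question mark on the URL
connection.url=jdbc:postgresql://localhost:5432/kafkadb
connection.user=postgres
connection.password=postgres
# fix 1
insert.mode=insert
pk.mode=none
# fix 4: a real column name instead of none (customer_id is a hypothetical choice)
pk.fields=customer_id
# fix 2: table.name.format commented out; auto.create names the table after the topic
# table.name.format=kafkatable
auto.create=true
auto.evolve=false
# offset.storage.file.filename is a worker-level setting, so it is omitted here

And, per fixes 5 and 6, a message whose schema and payload field names agree (with int8 in place of int32, and amount_paid sent as a string to match its declared type) might look like:

    {
        "schema": {
            "type": "struct",
            "fields": [
                {"type": "int8", "optional": false, "field": "customer_id"},
                {"type": "int8", "optional": true, "field": "month"},
                {"type": "string", "optional": true, "field": "amount_paid"}
            ],
            "optional": false,
            "name": "msgschema"
        },
        "payload": {
            "customer_id": 13,
            "month": 12,
            "amount_paid": "1313.13"
        }
    }

As noted in the comments above, when this envelope is sent through kafka-console-producer it must be on a single line.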
1
vote

Kafka Connect, which is part of Apache Kafka, is perfectly suited to this. You can learn more about Kafka Connect in general here.

To stream data from your Kafka topic to Postgres (or any other database), use the JDBC Sink connector, which you can get from here.
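
Once the connector is installed under the worker's plugin.path, a standalone worker runs the sink by taking the worker properties and the connector properties as arguments, as the question already does. For example, using the file names from the question's listings:

./bin/connect-standalone.sh config/connect-standalone.properties config/connect-post.properties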