I need to get data from a Kafka topic (which I fill with my own script) onto every replica in a ClickHouse (CH) cluster.
I've created:
- a 'queue' table (Kafka engine) on every replica;
- a 'consumer' materialized view (moving data from 'queue' into the distributed table) on every replica;
- a 'data' distributed table.
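For reference, a minimal sketch of this setup. All names, the schema, the broker address, the topic, the consumer group, and the underlying local table (`local_data`, cluster `my_cluster`) are placeholders, not my exact DDL:

```sql
-- 1. Kafka engine table on every replica (schema/broker/topic/group are assumed)
CREATE TABLE queue (
    ts DateTime,
    value String
) ENGINE = Kafka('kafka:9092', 'my_topic', 'ch_consumer_group', 'JSONEachRow');

-- 2. Materialized view that pushes consumed rows into the distributed table
CREATE MATERIALIZED VIEW consumer TO data
AS SELECT ts, value FROM queue;

-- 3. Distributed table over the cluster, backed by a local table on each replica
CREATE TABLE data AS local_data
ENGINE = Distributed(my_cluster, default, local_data, rand());
```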
While I'm putting data into Kafka, I'm fairly sure the tables accept it (a simple `SELECT count(*) FROM data` shows rows), but I always get this:
```
Progress: 1.55 thousand rows, 1.24 MB (297.46 rows/s., 237.18 KB/s.)
Received exception from server (version 18.14.17):
Code: 159. DB::Exception: Received from host:port. DB::Exception: Failed to claim consumer: .
0 rows in set. Elapsed: 5.313 sec. Processed 1.55 thousand rows, 1.24 MB (291.94 rows/s., 232.78 KB/s.)
```
When I stop filling Kafka, there is a short window during which my query completes. But after a few seconds I get a count of 0 on every table I created.