I am using Spring Kafka's KafkaTemplate to produce messages, but the rate at which it produces them is far too slow: it takes around 8 minutes to produce 15,000 messages.
This is how I created the KafkaTemplate:
@Bean
public ProducerFactory<String, GenericRecord> highSpeedAvroProducerFactory(
        @Qualifier("highSpeedProducerProperties") KafkaProperties properties) {
    final Map<String, Object> kafkaPropertiesMap = properties.getKafkaPropertiesMap();
    System.out.println(kafkaPropertiesMap);
    kafkaPropertiesMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    kafkaPropertiesMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, AvroGenericSerializer.class);
    return new DefaultKafkaProducerFactory<>(kafkaPropertiesMap);
}

@Bean
public KafkaTemplate<String, GenericRecord> highSpeedAvroKafkaTemplate(
        @Qualifier("highSpeedAvroProducerFactory") ProducerFactory<String, GenericRecord> highSpeedAvroProducerFactory) {
    return new KafkaTemplate<>(highSpeedAvroProducerFactory);
}
Here is how I am using the template to send the messages:
@Async("servicingPlatformUpdateExecutor")
public void afterWrite(List<? extends Test> items) {
    LOGGER.info("Batch start:{}", items.size());
    for (Test test : items) {
        if (test.isOmega()) {
            ObjectKeyRecord objectKeyRecord = ObjectKeyRecord.newBuilder().setType("test").setId(test.getId()).build();
            LOGGER.info("build start, {}", test.getId());
            GenericRecord message = MessageUtils.buildEventRecord(
                    schemaService.findSchema(topicName)
                            .orElseThrow(() -> new OmegaException("SchemaNotFoundException", topicName)), objectKeyRecord, test);
            LOGGER.info("build end, {}", test.getId());
            LOGGER.info("send Started , {}", test.getId());
            ListenableFuture<SendResult<String, GenericRecord>> future = highSpeedAvroKafkaTemplate.send(topicName, objectKeyRecord.toString(), message);
            LOGGER.info("send Done , {}", test.getId());
            future.addCallback(new KafkaProducerFutureCallback(kafkaSender, topicName, objectKeyRecord.toString(), message));
        }
    }
    LOGGER.info("Batch end");
}
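KafkaProducerFutureCallback is a small project class that logs the send outcome (the KafkaProducerFutureCallback:38 line in the log below comes from it). A simplified, assumed sketch of what it does:

import org.apache.avro.generic.GenericRecord;
import org.springframework.kafka.support.SendResult;
import org.springframework.util.concurrent.ListenableFutureCallback;

// Simplified, assumed sketch of KafkaProducerFutureCallback; the real class is
// project-specific and logs the result of each asynchronous send.
public class KafkaProducerFutureCallback
        implements ListenableFutureCallback<SendResult<String, GenericRecord>> {

    private final Object kafkaSender; // project-specific sender, real type assumed
    private final String topic;
    private final String key;
    private final GenericRecord message;

    public KafkaProducerFutureCallback(Object kafkaSender, String topic, String key, GenericRecord message) {
        this.kafkaSender = kafkaSender;
        this.topic = topic;
        this.key = key;
        this.message = message;
    }

    @Override
    public void onSuccess(SendResult<String, GenericRecord> result) {
        // logs the acknowledged send, e.g. partition/offset from result.getRecordMetadata()
    }

    @Override
    public void onFailure(Throwable ex) {
        // logs the failure for the given topic/key/message
    }
}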
Producer Properties:
metric.reporters = []
metadata.max.age.ms = 300000
reconnect.backoff.ms = 50
sasl.kerberos.ticket.renew.window.factor = 0.8
bootstrap.servers = [***VALID BROKERS***]
ssl.keystore.type = JKS
sasl.mechanism = GSSAPI
max.block.ms = 9223372036854775807
interceptor.classes = null
ssl.truststore.password = null
client.id = producer-1
ssl.endpoint.identification.algorithm = null
request.timeout.ms = 30000
acks = all
receive.buffer.bytes = 32768
ssl.truststore.type = JKS
retries = 2147483647
ssl.truststore.location = null
ssl.keystore.password = null
send.buffer.bytes = 131072
compression.type = none
metadata.fetch.timeout.ms = 60000
retry.backoff.ms = 100
sasl.kerberos.kinit.cmd = /usr/bin/kinit
buffer.memory = 800000000
timeout.ms = 30000
key.serializer = class org.apache.kafka.common.serialization.StringSerializer
sasl.kerberos.service.name = kafka
sasl.kerberos.ticket.renew.jitter = 0.05
ssl.trustmanager.algorithm = PKIX
block.on.buffer.full = false
ssl.key.password = null
sasl.kerberos.min.time.before.relogin = 60000
connections.max.idle.ms = 540000
max.in.flight.requests.per.connection = 10
metrics.num.samples = 2
ssl.protocol = TLS
ssl.provider = null
ssl.enabled.protocols = [TLSv1.2]
batch.size = 40000000
ssl.keystore.location = null
ssl.cipher.suites = null
security.protocol = SASL_SSL
max.request.size = 1048576
value.serializer = class com.message.serialization.AvroGenericSerializer
ssl.keymanager.algorithm = SunX509
metrics.sample.window.ms = 30000
partitioner.class = class org.apache.kafka.clients.producer.internals.DefaultPartitioner
linger.ms = 2
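For reference, the throughput-related entries in that dump correspond to the following ProducerConfig keys in the kafkaPropertiesMap built above; this only restates the values already shown, it is not a change I have made:

// Same values as in the dump above, expressed with ProducerConfig constants.
kafkaPropertiesMap.put(ProducerConfig.ACKS_CONFIG, "all");                        // acks = all
kafkaPropertiesMap.put(ProducerConfig.BATCH_SIZE_CONFIG, 40000000);               // batch.size
kafkaPropertiesMap.put(ProducerConfig.LINGER_MS_CONFIG, 2);                       // linger.ms
kafkaPropertiesMap.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "none");           // compression.type
kafkaPropertiesMap.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 800000000L);          // buffer.memory
kafkaPropertiesMap.put(ProducerConfig.MAX_IN_FLIGHT_REQUESTS_PER_CONNECTION, 10); // max.in.flight.requests.per.connection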
Here is the log, which shows that each call to the KafkaTemplate send method takes only a few milliseconds:
2018-04-27 05:29:05.691 INFO - testservice - - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:70 - build start, 1
2018-04-27 05:29:05.691 INFO - testservice - - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:75 - build end, 1
2018-04-27 05:29:05.691 INFO - testservice - - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:76 - send Started , 1
2018-04-27 05:29:05.778 INFO - testservice - - UpdateExecutor-1 - com.test.testservice.adapter.batch.testsyncjob.UpdateWriteListener:79 - send Done , 1
2018-04-27 05:29:07.794 INFO - testservice - - kafka-producer-network-thread | producer-1 - com.test.testservice.adapter.batch.testsyncjob.KafkaProducerFutureCallback:38
Any suggestions on how I can improve the sender's performance would be greatly appreciated.
Spring Kafka version: 1.2.3.RELEASE, Kafka client: 0.10.2.1
UPDATE 1:
Changed the serializer to ByteArraySerializer and produced the same messages. Each send call on the KafkaTemplate still takes 100 to 200 milliseconds:
ObjectKeyRecord objectKeyRecord = ObjectKeyRecord.newBuilder().setType("test").setId(test.getId()).build();
GenericRecord message = MessageUtils.buildEventRecord(
        schemaService.findSchema(testConversionTopicName)
                .orElseThrow(() -> new TestException("SchemaNotFoundException", testTopicName)), objectKeyRecord, test);
byte[] messageBytes = serializer.serialize(testConversionTopicName, message);
LOGGER.info("send Started , {}", test.getId());
ListenableFuture<SendResult<String, byte[]>> future = highSpeedAvroKafkaTemplate.send(testConversionTopicName, objectKeyRecord.toString(), messageBytes);
LOGGER.info("send Done , {}", test.getId());
future.addCallback(new KafkaProducerFutureCallback(kafkaSender, testConversionTopicName, objectKeyRecord.toString(), message));
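For this test the value type of the template changes to byte[] (the Avro record is serialized up front with serializer, assumed here to be the AvroGenericSerializer invoked directly). The factory behind the template is assumed to be the original one with only the value serializer swapped; a simplified sketch:

@Bean
public ProducerFactory<String, byte[]> highSpeedByteArrayProducerFactory(
        @Qualifier("highSpeedProducerProperties") KafkaProperties properties) {
    // Same producer config map as before; only the value serializer changes to
    // org.apache.kafka.common.serialization.ByteArraySerializer.
    final Map<String, Object> kafkaPropertiesMap = properties.getKafkaPropertiesMap();
    kafkaPropertiesMap.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    kafkaPropertiesMap.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, ByteArraySerializer.class);
    return new DefaultKafkaProducerFactory<>(kafkaPropertiesMap);
}

@Bean
public KafkaTemplate<String, byte[]> highSpeedAvroKafkaTemplate(
        ProducerFactory<String, byte[]> highSpeedByteArrayProducerFactory) {
    return new KafkaTemplate<>(highSpeedByteArrayProducerFactory);
}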