1 vote

Is there a way to log all spring-kafka configuration values using log4j? I have tried the following logger configuration in my application's log4j2.yml (with both info and debug levels); I can see all the INFO and DEBUG logs, but not the configuration values.

- name: org.springframework.kafka
  additivity: false
  level: info
  AppenderRef:
    - ref: SOME_APPENDER

Thanks in advance for helping.


2 Answers

1 vote

I think you really only need to worry about the properties applied to the Apache Kafka client; that will cover the Spring Kafka configuration, too. For this purpose you only need to configure the org.apache.kafka.clients category at INFO level.
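For example, a minimal sketch of that logger entry in log4j2.yml, reusing the SOME_APPENDER name from the question (INFO is enough, because the client dumps its effective configuration at that level):

Loggers:
  Logger:
    - name: org.apache.kafka.clients
      level: info
      additivity: false
      AppenderRef:
        - ref: SOME_APPENDER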

And this is a log from one of my tests:

16:55:20.616 INFO  [main][org.apache.kafka.clients.consumer.ConsumerConfig] ConsumerConfig values: 
    auto.commit.interval.ms = 10
    auto.offset.reset = earliest
    bootstrap.servers = [127.0.0.1:56505]
    check.crcs = true
    client.id = 
    connections.max.idle.ms = 540000
    enable.auto.commit = false
    exclude.internal.topics = true
    fetch.max.bytes = 52428800
    fetch.max.wait.ms = 500
    fetch.min.bytes = 1
    group.id = blc
    heartbeat.interval.ms = 3000
    interceptor.classes = null
    internal.leave.group.on.close = true
    isolation.level = read_uncommitted
    key.deserializer = class org.apache.kafka.common.serialization.IntegerDeserializer
    max.partition.fetch.bytes = 1048576
    max.poll.interval.ms = 300000
    max.poll.records = 500
    metadata.max.age.ms = 300000
    metric.reporters = []
    metrics.num.samples = 2
    metrics.recording.level = INFO
    metrics.sample.window.ms = 30000
    partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
    receive.buffer.bytes = 65536
    reconnect.backoff.max.ms = 1000
    reconnect.backoff.ms = 50
    request.timeout.ms = 305000
    retry.backoff.ms = 100
    sasl.jaas.config = null
    sasl.kerberos.kinit.cmd = /usr/bin/kinit
    sasl.kerberos.min.time.before.relogin = 60000
    sasl.kerberos.service.name = null
    sasl.kerberos.ticket.renew.jitter = 0.05
    sasl.kerberos.ticket.renew.window.factor = 0.8
    sasl.mechanism = GSSAPI
    security.protocol = PLAINTEXT
    send.buffer.bytes = 131072
    session.timeout.ms = 60000
    ssl.cipher.suites = null
    ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
    ssl.endpoint.identification.algorithm = null
    ssl.key.password = null
    ssl.keymanager.algorithm = SunX509
    ssl.keystore.location = null
    ssl.keystore.password = null
    ssl.keystore.type = JKS
    ssl.protocol = TLS
    ssl.provider = null
    ssl.secure.random.implementation = null
    ssl.trustmanager.algorithm = PKIX
    ssl.truststore.location = null
    ssl.truststore.password = null
    ssl.truststore.type = JKS
    value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
0 votes

This works fine for me...

Configuration:
  name: Default

  Properties:
    Property:
      name: log-path
      value: "logs"

  Appenders:

    Console:
      name: Console_Appender
      target: SYSTEM_OUT
      PatternLayout:
        pattern: "[%-5level] %d{yyyy-MM-dd HH:mm:ss.SSS} [%t] %c{1} - %msg%n"

  Loggers:

      Root:
        level: warn
        AppenderRef:
          - ref: Console_Appender

      Logger:
        - name: org.apache.kafka
          level: info
          AppenderRef:
            - ref: Console_Appender

and the resulting log output:

[INFO ] 2017-12-20 17:00:08.591 [main] ProducerConfig - ProducerConfig values: 
    acks = 1
    batch.size = 16384
    bootstrap.servers = [localhost:9092]
    buffer.memory = 33554432
    ...
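
The dump is produced by the Kafka client itself the moment a producer or consumer is constructed. A minimal sketch that triggers the ProducerConfig output above, using the plain Kafka client (the class name and broker address are illustrative; in a Spring Kafka application the same dump appears when KafkaTemplate or a listener container creates its first client):

import java.util.Properties;

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;

public class ProducerConfigLogDemo {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "1");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        // The "ProducerConfig values:" INFO line is emitted during construction,
        // before any record is sent and before a broker connection is attempted.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Nothing to do here; the config dump has already been logged.
        }
    }
}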