
I want to configure Kafka authentication (just authentication; no encryption is needed for now) using two listeners:

  • one for private inter-broker communication with PLAINTEXT security
  • one for public consumer/producer communication with SASL_PLAINTEXT and SCRAM-SHA-256

I have one Kafka cluster with just one broker (for testing purposes) and a ZooKeeper cluster with two nodes.

The steps I've taken are:

  1. Create the 'admin' and 'test-user' users in ZooKeeper:
kafka-configs.sh --zookeeper zk:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=test-secret]' \
 --entity-type users --entity-name test-user
kafka-configs.sh --zookeeper zk:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret]' \
 --entity-type users --entity-name admin
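As a sanity check (assuming the same zk:2181 ensemble as above), the stored credentials can be read back; the output should list a SCRAM-SHA-256 entry for the user:
kafka-configs.sh --zookeeper zk:2181 --describe --entity-type users --entity-name test-user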
  2. Configure server.properties as follows:
############################# Server Basics #############################
broker.id=1

############################# Socket Server Settings #############################
listeners=EXTERNAL://0.0.0.0:9095,INTERNAL://:9092
advertised.listeners=EXTERNAL://172.20.30.40:9095,INTERNAL://:9092
listener.security.protocol.map=INTERNAL:PLAINTEXT, EXTERNAL:SASL_PLAINTEXT

inter.broker.listener.name=INTERNAL
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256

sasl.enabled.mechanisms=PLAIN, SCRAM-SHA-256


num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
############################# Log Basics #############################
log.dirs=/opt/kafka/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
delete.topic.enable=false
auto.create.topics.enable=true
default.replication.factor=1
############################# Log Flush Policy #############################
#log.flush.interval.messages=10000
#log.flush.interval.ms=1000
############################# Log Retention Policy #############################
log.retention.hours=168
#log.retention.bytes=1073741824
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
log.cleaner.enable=true
############################# Offset Retention #############################
offsets.retention.minutes=1440
############################# Connect Policy #############################
zookeeper.connect=10.42.203.74:2181,10.42.214.116:2181
zookeeper.connection.timeout.ms=6000
  3. Create a kafka_server_jaas.conf file and pass it to Kafka at startup with -Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf:
internal.KafkaServer {

   org.apache.kafka.common.security.plain.PlainLoginModule required
   username="admin"
   password="admin-secret";
};


external.KafkaServer {

   org.apache.kafka.common.security.scram.ScramLoginModule required;
};
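For reference, a minimal sketch of how that system property is typically passed, assuming the stock startup scripts (kafka-run-class.sh picks up KAFKA_OPTS):
# export the JAAS file location, then start the broker as usual
export KAFKA_OPTS="-Djava.security.auth.login.config=/opt/kafka/config/kafka_server_jaas.conf"
kafka-server-start.sh /opt/kafka/config/server.properties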
  4. Create a test topic to publish/subscribe to:
kafka-topics.sh --create --zookeeper zk:2181 --replication-factor 1 --partitions 3 --topic test-topic
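To confirm the topic exists before testing authentication, it can be described back:
kafka-topics.sh --describe --zookeeper zk:2181 --topic test-topic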
  5. Create a client-secure.properties file to publish as 'test-user' with its credentials:
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
      username="test-user" \
      password="test-secret";
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
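The same properties file should also work on the consumer side; a sketch, reusing the advertised EXTERNAL address from server.properties:
kafka-console-consumer.sh --bootstrap-server 172.20.30.40:9095 --topic test-topic \
  --consumer.config client-secure.properties --from-beginning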
  6. Finally, try publishing to the previously created 'test-topic' through the EXTERNAL listener, authenticating as 'test-user':
kafka-console-producer.sh --broker-list 172.20.30.40:9095 --topic test-topic \
  --producer.config client-secure.properties

And I always get the following error:

ERROR [Producer clientId=console-producer] Connection to node -1 failed authentication due to: 
Client SASL mechanism 'SCRAM-SHA-256' not enabled in the server, enabled mechanisms are [PLAIN] 
(org.apache.kafka.clients.NetworkClient)

Why is the SCRAM-SHA-256 mechanism not enabled on the server? Shouldn't it be enabled by the 'sasl.enabled.mechanisms=PLAIN, SCRAM-SHA-256' property in the 'server.properties' file, together with the SCRAM config for the external listener defined in the 'kafka_server_jaas.conf' file?

I've already spent two days in a row fighting with this, applying different configurations without any success. Any help would be very much appreciated.

Thanks in advance


1 Answer


After days of struggling with it, I've found the solution.

I didn't mention in the post that I'm running Kafka as a container in Rancher, and port 9095 of the EXTERNAL listener was not mapped in Rancher, so it wasn't mapped in the Docker container either.

(screenshot: port mapping in the Rancher admin console)

Even though I was running the tests from inside the container, if the port of the listener you're publishing/subscribing on is not mapped, it doesn't work.
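As a quick way to rule this out (a sketch, assuming nc/netcat is available where the client runs), check that the listener port is reachable before digging into the SASL config:
# should report success once the 9095 mapping is in place
nc -vz 172.20.30.40 9095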