Spring Cloud Stream 2.0 documents that it uses kafka-clients 1.0 and is compatible with Kafka brokers 1.0 and 0.11 (0.10.2 and earlier are not mentioned).

Kafka itself documents that brokers 0.10.2 (and even 0.10.1) are compatible with any version of the Java clients, which presumably includes kafka-clients 1.0.

So are there any compatibility issues specifically between spring-cloud-stream-binder-kafka 2.0 and 0.10.2 brokers?

I'm planning an upgrade from spring-cloud-stream 1.2 + Kafka 0.10.2 to spring-cloud-stream 2.0 + Kafka 1.0, and I'm trying to understand whether I can do it in one go (clients -> 1.0, then brokers -> 1.0), or otherwise what no-downtime upgrade path spring-cloud-stream supports.

1 Answer

Yes, it should work ok (I just tested it with the current snapshot).

However, 2.0 uses native Kafka headers by default, and headers were only introduced in Kafka with the 0.11 broker.

You will need to set the producer headerMode to none or embeddedHeaders (none is a synonym for the deprecated raw mode from 1.x).

1.x uses embeddedHeaders by default, or raw (none) if so configured.

So, you would need to do this anyway, if you want a 2.0 producer to create messages for a 1.x consumer, regardless of the broker version.
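As a sketch, assuming a producer binding named output, the header mode could be set like this (property names per the Spring Cloud Stream 2.0 binding properties):

```properties
# application.properties (binding name "output" is hypothetical)

# Disable header mapping entirely; safe when talking to pre-0.11 brokers:
spring.cloud.stream.bindings.output.producer.headerMode=none

# Or embed headers in the payload, as 1.x does by default,
# so a 1.x consumer can still read them:
# spring.cloud.stream.bindings.output.producer.headerMode=embeddedHeaders
```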

On the consumer side, 2.0 will detect whether the message has native or embedded headers (or none).

Another caveat: you can't set the binder property autoAddPartitions to true unless the broker is at least 1.0.0, because the 2.0 provisioner uses the Java AdminClient; 1.x used the Scala client, which could grow partitions on older brokers.
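For reference, that binder property is set at the Kafka binder level; a minimal sketch (only valid once the brokers are on 1.0.0 or later):

```properties
# Requires brokers >= 1.0.0 with the 2.0 binder, since partition
# provisioning goes through the Java AdminClient:
spring.cloud.stream.kafka.binder.autoAddPartitions=true
```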