I have a problem with a Spring Cloud Stream application that uses the KStream binder. It listens to one input and directs messages to one output after processing them.
It expects a JSON string to come in and tries to convert it to a Spring Tuple on arrival; the reverse happens when the message is sent out.
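For context, the processor is shaped roughly like this (a simplified sketch only; the class and binding interface names are illustrative and the business logic is omitted):

import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.Input;
import org.springframework.cloud.stream.annotation.Output;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;
import org.springframework.tuple.Tuple;

@EnableBinding(FraudProcessor.Bindings.class)
public class FraudProcessor {

    // Illustrative binding interface; the real one exposes the same
    // "input" and "output" KStream bindings.
    interface Bindings {
        @Input("input")
        KStream<?, Tuple> input();

        @Output("output")
        KStream<?, Tuple> output();
    }

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, Tuple> process(KStream<?, Tuple> input) {
        // The binder converts the incoming JSON string into a Tuple before this
        // method is called (that conversion is where the exception below comes
        // from), and converts the returned Tuple back to JSON on the way out.
        return input.mapValues(tuple -> tuple); // business logic omitted
    }
}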
The problem is that a sysadmin may want to test a topic with kafka-console-producer.sh, for instance, and send it a plain non-JSON string such as "lol".
When that happens, the whole Spring Cloud Stream application dies right there with the following exception:
java.lang.RuntimeException: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'lol': was expecting ('true', 'false' or 'null')
at [Source: lol; line: 1, column: 7]
at org.springframework.tuple.JsonStringToTupleConverter.convert(JsonStringToTupleConverter.java:71) ~[spring-tuple-1.0.0.RELEASE.jar:na]
at org.springframework.tuple.JsonStringToTupleConverter.convert(JsonStringToTupleConverter.java:31) ~[spring-tuple-1.0.0.RELEASE.jar:na]
at org.springframework.tuple.TupleBuilder.fromString(TupleBuilder.java:153) ~[spring-tuple-1.0.0.RELEASE.jar:na]
at org.springframework.cloud.stream.converter.TupleJsonMessageConverter.convertFromInternal(TupleJsonMessageConverter.java:90) ~[spring-cloud-stream-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.springframework.messaging.converter.AbstractMessageConverter.fromMessage(AbstractMessageConverter.java:175) ~[spring-messaging-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.messaging.converter.AbstractMessageConverter.fromMessage(AbstractMessageConverter.java:167) ~[spring-messaging-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.messaging.converter.CompositeMessageConverter.fromMessage(CompositeMessageConverter.java:55) ~[spring-messaging-4.3.14.RELEASE.jar:4.3.14.RELEASE]
at org.springframework.cloud.stream.binder.kstream.KStreamListenerParameterAdapter$1.apply(KStreamListenerParameterAdapter.java:66) ~[spring-cloud-stream-binder-kstream-1.3.2.RELEASE.jar:1.3.2.RELEASE]
at org.apache.kafka.streams.kstream.internals.KStreamMap$KStreamMapProcessor.process(KStreamMap.java:42) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.ProcessorNode.process(ProcessorNode.java:82) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.ProcessorContextImpl.forward(ProcessorContextImpl.java:202) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.SourceNode.process(SourceNode.java:66) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamTask.process(StreamTask.java:180) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:436) ~[kafka-streams-0.10.1.1.jar:na]
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:242) ~[kafka-streams-0.10.1.1.jar:na]
Caused by: com.fasterxml.jackson.core.JsonParseException: Unrecognized token 'lol': was expecting ('true', 'false' or 'null')
at [Source: lol; line: 1, column: 7]
at com.fasterxml.jackson.core.JsonParser._constructError(JsonParser.java:1702) ~[jackson-core-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.core.base.ParserMinimalBase._reportError(ParserMinimalBase.java:558) ~[jackson-core-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._reportInvalidToken(ReaderBasedJsonParser.java:2839) ~[jackson-core-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser._handleOddValue(ReaderBasedJsonParser.java:1903) ~[jackson-core-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.core.json.ReaderBasedJsonParser.nextToken(ReaderBasedJsonParser.java:749) ~[jackson-core-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.databind.ObjectMapper._initForReading(ObjectMapper.java:3850) ~[jackson-databind-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.databind.ObjectMapper._readMapAndClose(ObjectMapper.java:3799) ~[jackson-databind-2.8.10.jar:2.8.10]
at com.fasterxml.jackson.databind.ObjectMapper.readTree(ObjectMapper.java:2397) ~[jackson-databind-2.8.10.jar:2.8.10]
at org.springframework.tuple.JsonStringToTupleConverter.convert(JsonStringToTupleConverter.java:44) ~[spring-tuple-1.0.0.RELEASE.jar:na]
I would expect the framework to have at least some fault tolerance for this kind of behaviour; you cannot expect input to always be nice and well-formed. So I looked into the Spring documentation: https://docs.spring.io/spring-cloud-stream/docs/current/reference/htmlsingle/#_configuration_options
It lists some configuration options for what seems to be a built-in retry mechanism in case of failures, for instance the maxAttempts parameter.
But that parameter already defaults to 3, and I don't see the Spring Cloud Stream application making any attempt to recover from this error.
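For reference, this is the kind of setting I am talking about (the value shown is just the documented default, and with the KStream binder it does not seem to have any effect):

spring:
  cloud:
    stream:
      bindings:
        input:
          consumer:
            maxAttempts: 3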
So I would like to know: what is the recommended way of building bad-input tolerance into a Spring Cloud Stream application?
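The only workaround I can think of is to change the input binding so the value arrives as a raw String, then do the JSON parsing myself inside the topology and filter out anything that does not parse, instead of letting the converter throw and kill the stream thread. A rough sketch (names are illustrative, and I would still prefer a framework-level answer):

import com.fasterxml.jackson.databind.ObjectMapper;
import org.apache.kafka.streams.kstream.KStream;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.messaging.handler.annotation.SendTo;

public class DefensiveProcessor {

    private final ObjectMapper mapper = new ObjectMapper();

    @StreamListener("input")
    @SendTo("output")
    public KStream<?, String> process(KStream<?, String> input) {
        return input
                // keep only values that are valid JSON; bad input like "lol"
                // is dropped here instead of blowing up in the converter
                .filter((key, value) -> {
                    try {
                        mapper.readTree(value);
                        return true;
                    } catch (Exception e) {
                        return false;
                    }
                })
                .mapValues(value -> value); // actual processing would go here
    }
}

But this feels like re-implementing the content-type conversion by hand, so I suspect there is a more idiomatic option I am missing.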
The configuration for the application looks like this:
spring:
  cloud:
    stream:
      bindings:
        input:
          content-type: application/json
          destination: inbound
          group: fraud
          consumer:
            headerMode: raw
        output:
          content-type: application/x-spring-tuple
          destination: outbound
          producer:
            headerMode: raw
            useNativeEncoding: true

spring.cloud.stream.kstream.binder.configuration:
  key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
  value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde