3
votes

Kafka is very common; many companies use it. I understand well how both Kafka and Spark work, and I am experienced with both of them. What I don't understand is the use cases. Why would you use Kafka with Spark, rather than just Spark?

As I see it, Kafka's main usage is as a staging area in an ETL pipeline for real time (streaming) data.

I imagine that there is a data source cluster where the data is originally stored. It can be, for example, Vertica, Cassandra, Hadoop, etc.

Then there is a processing cluster that reads the data from the data source cluster and writes it to a distributed Kafka log, which is basically a staging data cluster.

Then there is another processing cluster - a Spark cluster that reads the data from Kafka, applies some transformations and aggregations to the data, and writes it to the final destination.

If what I imagine is correct, I can just cut Kafka out of the middle, and in a Spark program that runs on a Spark cluster, the driver will read the data from the original source and parallelize it for processing. What is the advantage of placing Kafka in the middle?

Can you give me concrete use cases where Kafka is helpful, rather than just reading the data into Spark in the first place, without going through Kafka?

1
In some companies, data is written to Kafka as the primary store. From there, it's then written to Cassandra, Hadoop, etc. Besides that, Kafka has more APIs than just producer and consumer. And the same question could be asked about other streaming technologies as well (Streamsets, Flink, Beam, for example) - OneCricketeer
@cricket_007 Let's say we are talking about an online advertising company that has a Java Netty web server that receives requests informing it of ad impressions or clicks. So if I get you right, the Netty web server writes the impressions and clicks directly from memory to Kafka as the first data store that ever stores this data, and then Spark/Flink/Storm/Samza reads the data from Kafka, processes it, and writes it in a more structured way to Cassandra/Hadoop/etc. Am I correct? - Alon
It doesn't have to be Java web services, but yes, that's entirely possible - OneCricketeer
@cricket_007 I know it doesn't have to be Java. I just tried to give a concrete example. I think the whole end-to-end process is a lot clearer to me now, thanks. You said that Kafka has more uses than that, though. Could you please give me an example of a different scenario, where Kafka plays a different role? - Alon
Kafka Streams doesn't require a standalone cluster like Spark to transform, filter, join Kafka events. Kafka Connect can be used to stream messages between external systems, only requiring a configuration file in most cases, not constantly writing the same ETL code, or tuning of executors and memory, like you would have to do in Spark... Those are the APIs outside of producer/consumer - OneCricketeer
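To illustrate the "only requiring a configuration file" point above, here is a sketch of a Kafka Connect connector config (the connector class, connection URL, and table name are assumptions for illustration, not from the thread) that streams rows from a database table into a Kafka topic with no ETL code at all:

```json
{
  "name": "orders-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://db:5432/shop",
    "mode": "incrementing",
    "incrementing.column.name": "id",
    "table.whitelist": "orders",
    "topic.prefix": "db-"
  }
}
```

Posting this JSON to the Connect REST API is all it takes to start the pipeline; there are no executors or memory settings to tune.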

1 Answer

2
votes

Kafka Streams directly addresses a lot of the difficult problems in stream processing:

  • Event-at-a-time processing (not micro batch) with millisecond latency.
  • Stateful processing, including distributed joins and aggregations.
  • A convenient DSL.
  • Windowing with out-of-order data, using a Dataflow-like model.
  • Distributed processing and fault tolerance with fast failover.
  • No-downtime rolling deployments.
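The stateful, windowed style of processing listed above can be sketched with the Kafka Streams DSL. The topic names and serdes below are assumptions for illustration (reusing the ad-impressions scenario from the comments), not something prescribed by Kafka itself:

```java
import java.time.Duration;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.Grouped;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;
import org.apache.kafka.streams.kstream.TimeWindows;

public class ImpressionCounter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "impression-counter");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> impressions = builder.stream(
            "ad-impressions", Consumed.with(Serdes.String(), Serdes.String()));

        // Stateful, windowed aggregation: count impressions per key
        // in 1-minute windows, backed by a local state store.
        impressions
            .groupByKey(Grouped.with(Serdes.String(), Serdes.String()))
            .windowedBy(TimeWindows.ofSizeWithNoGrace(Duration.ofMinutes(1)))
            .count()
            .toStream((windowedKey, count) -> windowedKey.key())
            .to("impression-counts", Produced.with(Serdes.String(), Serdes.Long()));

        // No cluster manager: this is just a JVM process talking to Kafka.
        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}
```

Running a second copy of this process is all it takes to scale out: the instances rebalance the topic's partitions between themselves.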

Apache Spark can be used with Kafka to stream the data, but if you are deploying a Spark cluster for the sole purpose of this new application, that is definitely a big complexity hit.

Kafka Streams, by contrast, requires just Kafka and your application. It also balances the processing load as new instances of your app are added or existing ones crash, maintains local state for tables, and helps in recovering from failures.

So, what should you use?

Low latency and easy-to-use event-time support also apply to Kafka Streams. It is a rather focused library, and it's very well suited for certain types of tasks. That's also why some of its design can be so optimized for how Kafka works. You don't need to set up any kind of special Kafka Streams cluster, and there is no cluster manager. If you need to do a simple Kafka topic-to-topic transformation, count elements by key, enrich a stream with data from another topic, or run an aggregation, and you only need real-time processing, Kafka Streams is for you.

If event time is not relevant and latencies in the seconds range are acceptable, Spark is the first choice. It is stable, and almost any type of system can easily be integrated with it. In addition, it comes with every Hadoop distribution. Furthermore, the code used for batch applications can also be used for streaming applications, as the API is the same.
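The "same API for batch and streaming" point can be sketched with Spark Structured Streaming. The topic name and broker address are assumptions for illustration; the key observation is that only the source (`readStream` vs. `read`) changes, while the transformations stay identical:

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import static org.apache.spark.sql.functions.col;

public class ClickCounts {
    public static void main(String[] args) throws Exception {
        SparkSession spark = SparkSession.builder()
            .appName("click-counts")
            .master("local[*]")
            .getOrCreate();

        // Streaming source: a Kafka topic. For a batch job over files,
        // this would be spark.read() instead of spark.readStream();
        // everything below it would stay the same.
        Dataset<Row> clicks = spark.readStream()
            .format("kafka")
            .option("kafka.bootstrap.servers", "localhost:9092")
            .option("subscribe", "ad-clicks")
            .load();

        // Identical DataFrame transformations for batch and streaming.
        Dataset<Row> counts = clicks
            .selectExpr("CAST(value AS STRING) AS click")
            .groupBy(col("click"))
            .count();

        counts.writeStream()
            .outputMode("complete")
            .format("console")
            .start()
            .awaitTermination();
    }
}
```

Note the micro-batch trade-off from earlier in the answer still applies: this processes the topic in small batches, not event-at-a-time.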

Kafka can easily handle multiple sources writing to a single topic, whereas doing the same directly in Spark would be complex. With Kafka in the middle, it becomes very simple.
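As a sketch of that fan-in (topic and broker address are assumptions for illustration): independent sources, say a web tier and a mobile backend, each run a plain producer writing to the same topic, and downstream consumers see one unified stream without knowing how many producers exist.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class FanInExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // In reality these sends would live in two separate services;
        // both simply target the same "events" topic.
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("events", "web", "impression"));
            producer.send(new ProducerRecord<>("events", "mobile", "click"));
        }
    }
}
```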

Reference: https://dzone.com/articles/spark-streaming-vs-kafka-stream-1