26 votes

Could anyone compare Flink and Spark as platforms for machine learning? Which is potentially better for iterative algorithms? Link to the general Flink vs Spark discussion: What is the difference between Apache Spark and Apache Flink?

Flink is a relatively young project, and it is hard to compare this new, promising framework with a project as mature as Spark. – Nikita
I won't answer this question now because we will take a deeper look at both ML frameworks in the near future. For now I totally agree with @ipoteka. – Matthias Kricke
You should check out Flink's recently created machine learning library: ci.apache.org/projects/flink/flink-docs-master/libs/ml. As you can see here, we've planned to do much more: goo.gl/h9Qmt3 – Robert Metzger

2 Answers

25 votes

Disclaimer: I'm a PMC member of Apache Flink. My answer focuses on the differences in how Flink and Spark execute iterations.

Apache Spark executes iterations by loop unrolling. This means that for each iteration, a new set of tasks/operators is scheduled and executed. Spark does this very efficiently because it is very good at low-latency task scheduling (the same mechanism is used for Spark Streaming, by the way) and it caches data in memory across iterations. Therefore, each iteration operates on the result of the previous iteration, which is held in memory. In Spark, iterations are implemented as regular for-loops (see the Logistic Regression example).
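To make the pattern concrete, here is a minimal pure-Python sketch (not actual Spark code, and deliberately free of any Spark dependency): a plain for-loop where each pass consumes the in-memory result of the previous pass, which is the shape of Spark's logistic regression example.

```python
from math import exp

# Toy dataset held "in memory", like a cached RDD:
# one feature per point, label 1.0 when the feature is positive.
data = [([x], 1.0 if x > 0 else 0.0) for x in [-2.0, -1.0, 1.0, 2.0]]
w = [0.0]  # model weight, updated once per iteration by the "driver"

def sigmoid(z):
    return 1.0 / (1.0 + exp(-z))

for _ in range(100):  # each pass corresponds to one "unrolled" job
    grad = [0.0]
    for features, label in data:  # corresponds to a map/reduce pass over the data
        pred = sigmoid(w[0] * features[0])
        grad[0] += (pred - label) * features[0]
    w[0] -= 0.5 * grad[0]  # driver applies the aggregated gradient

# w[0] ends up positive, reflecting the positive feature/label correlation
```

The point is that no framework-level loop construct is needed: the loop lives in the driver program, and efficiency comes from fast task scheduling plus caching the previous iteration's result.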

Flink executes programs with iterations as cyclic data flows. This means that a dataflow program (and all of its operators) is scheduled just once, and the data is fed back from the tail of an iteration to its head. Basically, data flows in cycles around the operators within an iteration. Since operators are scheduled only once, they can maintain state across all iterations. Flink's API offers two dedicated iteration operators: 1) bulk iterations, which are conceptually similar to loop unrolling, and 2) delta iterations. Delta iterations can significantly speed up certain algorithms because the amount of work per iteration decreases as the iterations progress. For example, the 10th iteration of a delta-iteration PageRank implementation completes much faster than the first.
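The delta-iteration idea can be sketched in plain Python (a conceptual illustration, not Flink's actual API): keep a solution set (the current state) and a workset (only the elements that changed last round), and recompute only what the workset touches, so later rounds process less and less data. Here the example computes connected components by propagating the minimum node id.

```python
# Toy undirected graph: node -> list of neighbors.
graph = {1: [2], 2: [1, 3], 3: [2], 4: [5], 5: [4]}

solution = {v: v for v in graph}  # solution set: each node starts in its own component
workset = set(graph)              # workset: initially every node counts as "changed"

while workset:                    # iteration terminates when the workset is empty
    next_workset = set()
    for v in workset:             # only changed elements are processed
        for n in graph[v]:
            if solution[v] < solution[n]:  # propagate the smaller component id
                solution[n] = solution[v]
                next_workset.add(n)        # only n needs to be revisited next round
    workset = next_workset

print(solution)  # {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

After the first round most nodes stop changing, so subsequent rounds touch only a shrinking frontier; this is the same reason a late PageRank delta iteration is much cheaper than the first one.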

0 votes

From my experience with ML and data-stream processing: Flink and Spark are good at different things, and they can complement each other in ML scenarios. Flink is well suited to online learning tasks, where a partial model is continuously updated by consuming new events while inference runs in real time. That partial model can also be merged with a pre-trained model built offline by Spark on historical data.