0
votes

The number of tasks in Spark is determined by the total number of RDD partitions at the beginning of a stage. For example, when a Spark application reads data from HDFS, the partitioning of the Hadoop RDD is inherited from FileInputFormat in MapReduce, which is affected by the HDFS block size, the value of mapred.min.split.size, the compression codec, etc.
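As a minimal sketch (the path and partition count are placeholders), the number of input partitions can at least be hinted when reading the file:

```scala
import org.apache.spark.{SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("read-example"))

// textFile accepts a minPartitions hint; the actual split count still depends
// on HDFS block size, mapred.min.split.size and whether the file is splittable.
val rdd = sc.textFile("hdfs:///data/input.txt", 8)
println(rdd.getNumPartitions)
```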

A screenshot of my tasks:

The tasks in the screenshot took 7, 7, and 4 seconds, and I want to balance them. Also, the stage is split into only 3 tasks; is there any way to tell Spark how many partitions/tasks to use?

1
You could do the .repartition(200) operation first: spark.apache.org/docs/latest/… Nevertheless, the input size is really small, so the number of HDFS blocks will also be low. For optimal HDFS performance, files should be approximately equal to the block size. You could repartition in Spark to distribute the data among more executors. - Fokko Driesprong
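A sketch of the suggested call, assuming the `rdd` from the reading example above (200 is just the value from the comment):

```scala
// repartition triggers a full shuffle, which evens out partition sizes
// and therefore the work per task in the following stage.
val balanced = rdd.repartition(200)
println(balanced.getNumPartitions) // 200
```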

1 Answer

0
votes

The number of tasks depends on the number of partitions. You can set a partitioner for the RDD, and in the partitioner you can specify the number of partitions.
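A minimal sketch of this idea, assuming a key-value RDD and illustrative names and numbers:

```scala
import org.apache.spark.{HashPartitioner, SparkConf, SparkContext}

val sc = new SparkContext(new SparkConf().setAppName("partition-example"))

// Key the records so a partitioner can be applied, then spread them across
// 12 partitions with a HashPartitioner -> 12 tasks in the next stage.
val keyed = sc.textFile("hdfs:///data/input").map(line => (line.hashCode, line))
val repartitioned = keyed.partitionBy(new HashPartitioner(12))

println(repartitioned.getNumPartitions) // 12
```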