Summary: I can't get my Python Spark job to run on all nodes of my Hadoop cluster. I've installed the Spark build for Hadoop, 'spark-1.5.2-bin-hadoop2.6'. When I launch a Java Spark job, the load gets distributed over all nodes; when I launch a Python Spark job, only one node takes the load.
Setup:
- HDFS and YARN configured for 4 nodes: nk01 (namenode), nk02, nk03, nk04, running on Xen virtual servers
- versions: jdk1.8.0_66, hadoop-2.7.1, spark-1.5.2-bin-hadoop2.6
- Hadoop installed on all 4 nodes
- Spark installed only on nk01
I copied a bunch of Gutenberg files (thank you, Johannes!) onto HDFS and tried doing a word count in Java and in Python on a subset of the files (those starting with 'e'):
Python:
Using a homebrew Python script for the word count:
/opt/spark/bin/spark-submit wordcount.py --master yarn-cluster \
--num-executors 4 --executor-cores 1
The Python code assigns 4 partitions:
tt=sc.textFile('/user/me/gutenberg/text/e*.txt',4)
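For reference, wordcount.py follows the standard PySpark word-count pattern; here is a minimal sketch of it (only the textFile() call with 4 partitions above is copied verbatim, the rest is representative of what the script does):

# wordcount.py -- minimal PySpark word count (sketch; only the textFile()
# call with 4 partitions is taken verbatim from my script)
from pyspark import SparkContext

sc = SparkContext(appName="PythonWordCount")

# read the Gutenberg files starting with 'e', asking for 4 partitions
tt = sc.textFile('/user/me/gutenberg/text/e*.txt', 4)

# classic word count: split lines into words, pair each with 1, sum per word
counts = (tt.flatMap(lambda line: line.split())
            .map(lambda word: (word, 1))
            .reduceByKey(lambda a, b: a + b))

# force the computation and show a small sample
for word, count in counts.take(10):
    print("%s: %i" % (word, count))

sc.stop()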
Load on the 4 nodes over a 60-second window:
Java:
Using the JavaWordCount example found in the Spark distribution:
/opt/spark/bin/spark-submit --class JavaWordCount --master yarn-cluster \
--num-executors 4 jwc.jar '/user/me/gutenberg/text/e*.txt'
Conclusion: the Java version distributes its load across the cluster; the Python version runs on just 1 node.
Question: how do I get the Python version to distribute the load across all nodes as well?


