
I am trying GraphX with the LiveJournal data set: https://snap.stanford.edu/data/soc-LiveJournal1.html.

I have a cluster of 10 computing nodes. Each node has 64 GB of RAM and 32 cores.

When I run the PageRank algorithm using 9 worker nodes, it's slower than running it with just 1 worker node. I suspect I am not utilizing all the memory and/or cores due to some configuration issue.

I went through the configuration, tuning, and programming guides for Spark.

I am using spark-shell to run the script, which I invoke with:

./spark-shell --executor-memory 50g
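If the master URL is not picked up from the environment, it can also be passed explicitly; the URL below is the one shown in the log output:

./spark-shell --master spark://node0472.local:7077 --executor-memory 50g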

I had the master and workers running. When I start spark-shell I get the following logs:

14/07/09 17:26:10 INFO Slf4jLogger: Slf4jLogger started
14/07/09 17:26:10 INFO Remoting: Starting remoting
14/07/09 17:26:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://[email protected]:60035]
14/07/09 17:26:10 INFO Remoting: Remoting now listens on addresses: [akka.tcp://[email protected]:60035]
14/07/09 17:26:10 INFO SparkEnv: Registering MapOutputTracker
14/07/09 17:26:10 INFO SparkEnv: Registering BlockManagerMaster
14/07/09 17:26:10 INFO DiskBlockManager: Created local directory at /tmp/spark-local-20140709172610-7f5e
14/07/09 17:26:10 INFO MemoryStore: MemoryStore started with capacity 294.4 MB.
14/07/09 17:26:10 INFO ConnectionManager: Bound socket to port 45700 with id = ConnectionManagerId(node0472.local,45700)
14/07/09 17:26:10 INFO BlockManagerMaster: Trying to register BlockManager
14/07/09 17:26:10 INFO BlockManagerInfo: Registering block manager node0472.local:45700 with 294.4 MB RAM
14/07/09 17:26:10 INFO BlockManagerMaster: Registered BlockManager
14/07/09 17:26:10 INFO HttpServer: Starting HTTP Server
14/07/09 17:26:10 INFO HttpBroadcast: Broadcast server started at http://172.16.104.72:48116
14/07/09 17:26:10 INFO HttpFileServer: HTTP File server directory is /tmp/spark-7b4a7c3c-9fc9-4a64-b2ac-5f328abe9265
14/07/09 17:26:10 INFO HttpServer: Starting HTTP Server
14/07/09 17:26:11 INFO SparkUI: Started SparkUI at http://node0472.local:4040
14/07/09 17:26:12 INFO AppClient$ClientActor: Connecting to master spark://node0472.local:7077...
14/07/09 17:26:12 INFO SparkILoop: Created spark context..
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Connected to Spark cluster with app ID app-20140709172612-0007
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/0 on worker-20140709162149-node0476.local-53728 (node0476.local:53728) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/0 on hostPort node0476.local:53728 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/1 on worker-20140709162145-node0475.local-56009 (node0475.local:56009) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/1 on hostPort node0475.local:56009 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/2 on worker-20140709162141-node0474.local-58108 (node0474.local:58108) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/2 on hostPort node0474.local:58108 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/3 on worker-20140709170011-node0480.local-49021 (node0480.local:49021) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/3 on hostPort node0480.local:49021 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/4 on worker-20140709165929-node0479.local-53886 (node0479.local:53886) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/4 on hostPort node0479.local:53886 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/5 on worker-20140709170036-node0481.local-60958 (node0481.local:60958) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/5 on hostPort node0481.local:60958 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/6 on worker-20140709162151-node0477.local-44550 (node0477.local:44550) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/6 on hostPort node0477.local:44550 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/7 on worker-20140709162138-node0473.local-42025 (node0473.local:42025) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/7 on hostPort node0473.local:42025 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor added: app-20140709172612-0007/8 on worker-20140709162156-node0478.local-52943 (node0478.local:52943) with 32 cores
14/07/09 17:26:12 INFO SparkDeploySchedulerBackend: Granted executor ID app-20140709172612-0007/8 on hostPort node0478.local:52943 with 32 cores, 50.0 GB RAM
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/1 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/0 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/2 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/3 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/6 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/4 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/5 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/8 is now RUNNING
14/07/09 17:26:12 INFO AppClient$ClientActor: Executor updated: app-20140709172612-0007/7 is now RUNNING
Spark context available as sc.

scala> 14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:47343/user/Executor#1253632521] with ID 4
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:39431/user/Executor#1607018658] with ID 2
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:53722/user/Executor#-1846270627] with ID 5
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:40185/user/Executor#-111495591] with ID 6
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:36426/user/Executor#652192289] with ID 7
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:37230/user/Executor#-1581927012] with ID 3
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:46363/user/Executor#-182973444] with ID 1
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:58053/user/Executor#609775393] with ID 0
14/07/09 17:26:18 INFO SparkDeploySchedulerBackend: Registered executor: Actor[akka.tcp://[email protected]:55152/user/Executor#-2126598605] with ID 8
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0474.local:60025 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0473.local:33992 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0481.local:46513 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0477.local:37455 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0475.local:33829 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0479.local:56433 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0480.local:38134 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0476.local:46284 with 28.8 GB RAM
14/07/09 17:26:19 INFO BlockManagerInfo: Registering block manager node0478.local:43187 with 28.8 GB RAM

According to the logs, I believe my application was registered on the workers and each executor had 50 GB of RAM. Now I run the following Scala code in the shell to load the data and compute PageRank:

import org.apache.spark._
import org.apache.spark.graphx._
import org.apache.spark.rdd.RDD

// Load the edge list and cache the graph so both PageRank runs reuse it
val startgraphloading = System.currentTimeMillis
val graph = GraphLoader.edgeListFile(sc, "filepath").cache()
val endgraphloading = System.currentTimeMillis

// Time one PageRank iteration
val startpr1 = System.currentTimeMillis
val prGraph1 = graph.staticPageRank(1)
val endpr1 = System.currentTimeMillis

// Time five PageRank iterations
val startpr2 = System.currentTimeMillis
val prGraph2 = graph.staticPageRank(5)
val endpr2 = System.currentTimeMillis

val loadingt = endgraphloading - startgraphloading
val firstt = endpr1 - startpr1
val secondt = endpr2 - startpr2

println(loadingt)
println(firstt)
println(secondt)

When I check memory usage on each node, only 2-3 of the computing nodes are actually using any RAM. Is that expected? It runs faster with only 1 worker than with 9 workers.
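For reference, a quick way to see how the edges are spread across the cluster (a minimal sketch using the RDD API, run in the same shell session):

// Number of partitions in the edge RDD; if there are too few partitions,
// only a few executors will ever receive tasks.
graph.edges.partitions.size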

I am using Spark standalone cluster mode. Is there an issue with my configuration?

Thanks in advance :)


1 Answer


I figured out the problem after looking at the Spark code. It was an issue in my script where I load the graph with GraphX.

val graph = GraphLoader.edgeListFile(sc, "filepath").cache();

When I looked at the signature of edgeListFile, I saw a minEdgePartitions = 1 default. I thought it was only a lower bound on the number of partitions, but in practice it determines how many partitions you actually get. I set it to the number of partitions I wanted (one per node), and that was it. Another thing to take care of, as mentioned in the GraphX programming guide: if you haven't built Spark 1.0 from the main branch, you should call your own partitionBy on the graph. If the graph is not partitioned properly, it will cause problems.
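Here is a minimal sketch of the fix, assuming Spark 1.0's GraphLoader signature; the value 9 stands in for the number of worker nodes, and PartitionStrategy.RandomVertexCut is just one of the built-in strategies:

import org.apache.spark.graphx._

// Ask for one edge partition per worker node so the load is spread
// across the whole cluster instead of landing on 2-3 machines.
val graph = GraphLoader.edgeListFile(sc, "filepath", minEdgePartitions = 9).cache()

// On builds that support Graph.partitionBy, repartition the edges
// explicitly before running PageRank.
val partitionedGraph = graph.partitionBy(PartitionStrategy.RandomVertexCut)

With more edge partitions there are more tasks per iteration, so all the workers get something to do.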

It took me a while to figure this out; I hope this info saves someone some time :)