10 votes

I have created a Spark cluster on OpenStack, running Ubuntu 14.04 with 8 GB of RAM. I created two virtual machines with 3 GB each (keeping 2 GB for the parent OS). I then run the master and 2 workers on the first virtual machine, and 3 workers on the second machine.

The spark-env.sh file has the basic settings:

export SPARK_MASTER_IP=10.0.0.30
export SPARK_WORKER_INSTANCES=2
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=1

Whenever I deploy the cluster with start-all.sh, I get "failed to launch org.apache.spark.deploy.worker.Worker" and sometimes "failed to launch org.apache.spark.deploy.master.Master". When I check the log file for the error, I see the following:

Spark Command: /usr/lib/jvm/java-7-openjdk-amd64/bin/java -cp /home/ubuntu/spark-1.5.1/sbin/../conf/:/home/ubuntu/spark-1.5.1/assembly/target/scala-2.10/spark-assembly-1.5.1-hadoop2.2.0.jar:/home/ubuntu/spark-1.5.1/lib_managed/jars/datanucleus-api-jdo-3.2.6.jar:/home/ubuntu/spark-1.5.1/lib_managed/jars/datanucleus-core-3.2.10.jar:/home/ubuntu/spark-1.5.1/lib_managed/jars/datanucleus-rdbms-3.2.9.jar -Xms1g -Xmx1g -XX:MaxPermSize=256m org.apache.spark.deploy.master.Master --ip 10.0.0.30 --port 7077 --webui-port 8080

Even though I get the failure message, the master or worker does come alive after a few seconds.

Can someone please explain the reason?

I switched the log level from ERROR to INFO and saw two warnings: 1. "Your hostname, worker1 resolves to a loopback address: 127.0.1.1; using 10.0.0.30 instead (on interface eth0)" and 2. "Unable to load native-hadoop library for your platform... using builtin-java classes where applicable". Can these be interfering with the cluster deployment? (jsingh13)

2 Answers

11 votes

The Spark configuration system is a mess of environment variables, argument flags, and Java Properties files. I just spent a couple hours tracking down the same warning, and unraveling the Spark initialization procedure, and here's what I found:

  1. sbin/start-all.sh calls sbin/start-master.sh (and then sbin/start-slaves.sh)
  2. sbin/start-master.sh calls sbin/spark-daemon.sh start org.apache.spark.deploy.master.Master ...
  3. sbin/spark-daemon.sh start ... forks off a call to bin/spark-class org.apache.spark.deploy.master.Master ..., captures the resulting process id (pid), sleeps for 2 seconds, and then checks whether that pid's command's name is "java"
  4. bin/spark-class is a bash script, so it starts out with the command name "bash", and proceeds to:
    1. (re-)loads the Spark environment by sourcing bin/load-spark-env.sh
    2. finds the java executable
    3. finds the right Spark jar
    4. calls java ... org.apache.spark.launcher.Main ... to get the full classpath needed for a Spark deployment
    5. then finally hands over control, via exec, to java ... org.apache.spark.deploy.master.Master, at which point the command name becomes "java"

If steps 4.1 through 4.5 take longer than 2 seconds, which in my (and your) experience seems pretty much inevitable on a fresh OS where java has never been previously run, you'll get the "failed to launch" message, despite nothing actually having failed.
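
Roughly, the launch check in sbin/spark-daemon.sh behaves like the sketch below. This is a simplified paraphrase for illustration, not the real script; the log path is a placeholder and the real script also deals with pid files and log rotation.

# simplified paraphrase of the spark-daemon.sh launch check (illustrative only)
log=/tmp/spark-master.out                  # placeholder for the daemon's log file

nohup bin/spark-class org.apache.spark.deploy.master.Master --ip 10.0.0.30 --port 7077 \
  >> "$log" 2>&1 < /dev/null &
newpid=$!

sleep 2                                    # the hard-coded grace period

# spark-class is itself a bash script until it exec's java, so if those 2 seconds
# weren't enough, the command name is still "bash" and a failure gets reported
if [[ "$(ps -p "$newpid" -o comm=)" != "java" ]]; then
  echo "failed to launch org.apache.spark.deploy.master.Master"
fi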

The slaves will complain for the same reason, and thrash around until the master is actually available, but they should keep retrying until they successfully connect to the master.

I've got a pretty standard Spark deployment running on EC2; I use the following (rough example contents are sketched right after this list):

  • conf/spark-defaults.conf to set spark.executor.memory and add some custom jars via spark.{driver,executor}.extraClassPath
  • conf/spark-env.sh to set SPARK_WORKER_CORES=$(($(nproc) * 2))
  • conf/slaves to list my slaves
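
For illustration, the relevant bits of those files look roughly like this on my cluster; the jar paths, memory size, and hostnames here are made up, so don't copy them verbatim:

# conf/spark-defaults.conf
spark.executor.memory            2g
spark.driver.extraClassPath      /opt/extra-jars/mylib.jar
spark.executor.extraClassPath    /opt/extra-jars/mylib.jar

# conf/spark-env.sh
export SPARK_WORKER_CORES=$(($(nproc) * 2))

# conf/slaves -- one worker hostname per line
worker1
worker2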

Here's how I start a Spark deployment, bypassing some of the {bin,sbin}/*.sh minefield/maze:

# on master, with SPARK_HOME and conf/slaves set appropriately
mapfile -t ARGS < <(java -cp $SPARK_HOME/lib/spark-assembly-1.6.1-hadoop2.6.0.jar org.apache.spark.launcher.Main org.apache.spark.deploy.master.Master | tr '\0' '\n')
# $ARGS now contains the full call to start the master, which I daemonize with nohup
SPARK_PUBLIC_DNS=0.0.0.0 nohup "${ARGS[@]}" >> $SPARK_HOME/master.log 2>&1 < /dev/null &
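
As a quick sanity check that doesn't rely on any "failed to launch"-style guesswork, jps (which ships with the JDK) should list the Master JVM once it's actually up, and the log file from above carries the startup messages:

jps | grep Master                        # the Master JVM shows up here once started
tail -n 20 "$SPARK_HOME/master.log"      # look for the master's startup / bind messages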

I'm still using sbin/spark-daemon.sh to start the slaves, since that's easier than calling nohup within the ssh command:

MASTER=spark://$(hostname -i):7077
while read -r; do
  ssh -o StrictHostKeyChecking=no $REPLY "$SPARK_HOME/sbin/spark-daemon.sh start org.apache.spark.deploy.worker.Worker 1 $MASTER" &
done <$SPARK_HOME/conf/slaves
# this forks the ssh calls, so wait for them to exit before you logout
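
A plain bash wait after that loop blocks until all of those forked ssh calls have returned, so you know when it's safe to log out.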

There! It assumes that I'm using all the default ports and stuff, and that I'm not doing stupid shit like putting whitespace in filenames, but I think it's cleaner this way.

2 votes

I had the same problem when running spark/sbin/start-slave.sh on the master node.

hadoop@master:/opt/spark$ sudo ./sbin/start-slave.sh --master spark://master:7077
starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out
failed to launch: nice -n 0 /opt/spark/bin/spark-class org.apache.spark.deploy.worker.Worker --webui-port 8081 --master spark://master:7077
  Options:
    -c CORES, --cores CORES  Number of cores to use
    -m MEM, --memory MEM     Amount of memory to use (e.g. 1000M, 2G)
    -d DIR, --work-dir DIR   Directory to run apps in (default: SPARK_HOME/work)
    -i HOST, --ip IP         Hostname to listen on (deprecated, please use --host or -h)
    -h HOST, --host HOST     Hostname to listen on
    -p PORT, --port PORT     Port to listen on (default: random)
    --webui-port PORT        Port for web UI (default: 8081)
    --properties-file FILE   Path to a custom Spark properties file.
                             Default is conf/spark-defaults.conf.
full log in /opt/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-master.out

I found my mistake: I should not use the --master option, and should just run the command

hadoop@master:/opt/spark$ sudo ./sbin/start-slave.sh spark://master:7077

I was following the steps of this tutorial: https://phoenixnap.com/kb/install-spark-on-ubuntu

Hint: make sure to install all the dependencies first:

sudo apt install scala git -y