
I am setting up Spark with Hadoop 2.3.0 on Mesos 0.21.0. When I try to run Spark from the master, I get these error messages in the stderr of the Mesos slave:

WARNING: Logging before InitGoogleLogging() is written to STDERR

I1229 12:34:45.923665 8571 fetcher.cpp:76] Fetching URI 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz'

I1229 12:34:45.925240 8571 fetcher.cpp:105] Downloading resource from 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' to '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

E1229 12:34:45.927089 8571 fetcher.cpp:109] HDFS copyToLocal failed: hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

sh: 1: hadoop: not found

Failed to fetch: hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz

Failed to synchronize with slave (it's probably exited)

The interesting thing is that when I switch to the slave node and run the same command

hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'

it completes successfully.
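Since the command works in an interactive shell but fails when the slave launches it, the mesos-slave process is likely running with a different environment. A minimal diagnostic sketch (the process-name lookup and the variables checked are illustrative; adjust for your setup):

```shell
# Inspect the environment that the running mesos-slave process actually
# sees; hadoop must be on *its* PATH, not just your login shell's.
pid=$(pgrep -o mesos-slave)                    # oldest matching process
sudo tr '\0' '\n' < "/proc/$pid/environ" \
  | grep -E '^(PATH|HADOOP_HOME)='

# Compare with the interactive shell where copyToLocal works:
which hadoop
```

If `PATH` in the process environment differs from your shell's, the `hadoop: not found` error in the fetcher log is explained.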

I've checked the thread Hadoop 2.5.0 on Mesos 0.21.0 with library 0.0.8 executor error, which did not solve my problem. – fei_che_che
What user is mesos-slave run as? Does that user have hadoop in their PATH, and execute permission? – Adam
It's root, and it has hadoop in its PATH. – fei_che_che
(And root has execute permission on the actual hadoop binary?) – Adam
I had the same error in my logs. It turned out that I had forgotten to restart a slave that did not have HADOOP_HOME set or hadoop on its PATH. Once I restarted the master and the slave, Mesos found hadoop. – jlb

1 Answer


When starting the Mesos slave, you have to specify the path to your Hadoop installation through the following flag:

--hadoop_home=/path/to/hadoop

Without it, this just didn't work for me, even though I had the HADOOP_HOME environment variable set.
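A sketch of what the slave invocation might look like with that flag; the master address matches the question, but the Hadoop path is an example placeholder, not a confirmed location:

```shell
# Restart the Mesos slave, telling it explicitly where Hadoop lives.
# /usr/local/hadoop is an assumed example path; substitute your own.
mesos-slave --master=10.170.207.41:5050 \
            --hadoop_home=/usr/local/hadoop
```

After restarting the slave with this flag, the fetcher should be able to run `hadoop fs -copyToLocal` itself, so the `hadoop: not found` error should go away.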