I am setting up Spark with Hadoop 2.3.0 on Mesos 0.21.0. When I try to run Spark from the master, I get these error messages in the stderr of the Mesos slave:
WARNING: Logging before InitGoogleLogging() is written to STDERR
I1229 12:34:45.923665 8571 fetcher.cpp:76] Fetching URI 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz'
I1229 12:34:45.925240 8571 fetcher.cpp:105] Downloading resource from 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' to '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'
E1229 12:34:45.927089 8571 fetcher.cpp:109] HDFS copyToLocal failed: hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'
sh: 1: hadoop: not found
Failed to fetch: hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz
Failed to synchronize with slave (it's probably exited)
The interesting thing is that when I switch to the slave node and run the same command manually:
hadoop fs -copyToLocal 'hdfs://10.170.207.41/spark/spark-1.2.0.tar.gz' '/tmp/mesos/slaves/20141226-161203-701475338-5050-6942-S0/frameworks/20141229-111020-701475338-5050-985-0001/executors/20141226-161203-701475338-5050-6942-S0/runs/8ef30e72-d8cf-4218-8a62-bccdf673b5aa/spark-1.2.0.tar.gz'
it completes without errors.
Do the Mesos slaves have hadoop in their PATH, and execute permission? – Adam
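For reference, a minimal sketch of how one might check Adam's suggestion, i.e. whether the mesos-slave daemon itself (as opposed to an interactive shell on the slave) can see the hadoop binary. The pgrep-based PID lookup, the /usr/local/hadoop install path, and the --hadoop_home flag are illustrative assumptions, not details taken from the setup above:

# Print the PATH seen by the running mesos-slave process
# (assumes pgrep finds the daemon; adjust the pattern for your init system).
sudo cat /proc/$(pgrep -o -f mesos-slave)/environ | tr '\0' '\n' | grep '^PATH='

# Check that the hadoop launcher exists and is executable for the slave's user
# (example install path, adjust to your installation).
ls -l /usr/local/hadoop/bin/hadoop

# One possible fix: put hadoop on the PATH in the environment used to start the slave,
export PATH=$PATH:/usr/local/hadoop/bin
# or, if your Mesos build supports it, point the slave at the Hadoop install directly:
# mesos-slave --hadoop_home=/usr/local/hadoop ...

The point of the check is that the fetcher runs with the daemon's environment, so a PATH that works in an interactive login shell may not apply to the slave process.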