I am trying to install: 1) a cluster with Spark, and 2) a cluster with HBase.
My first attempt to create a cluster with the Spark bootstrap action succeeded, but I used the wrong SSH key, so I had to redo the install with a new key. Since then (from the second attempt onward) I get the same error on every attempt, for both 1 and 2 above.
I am following the instructions from: https://aws.amazon.com/articles/ElasticMapReduce/4926593393724923
My command from the AWS CLI:

aws emr create-cluster --name GCSpark --ami-version 3.2 --instance-type m3.xlarge --instance-count 3 --ec2-attributes KeyName=KeyPair --applications Name=Hive --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark
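For readability, here is the same command spread over multiple lines (KeyPair stands in for my actual key pair name). For the HBase cluster I intend to run the same kind of command but with --applications Name=HBase instead of the Spark bootstrap action, assuming I have understood the supported application names for create-cluster correctly:

# Spark cluster (same command as above, just reformatted)
aws emr create-cluster \
    --name GCSpark \
    --ami-version 3.2 \
    --instance-type m3.xlarge \
    --instance-count 3 \
    --ec2-attributes KeyName=KeyPair \
    --applications Name=Hive \
    --bootstrap-actions Path=s3://support.elasticmapreduce/spark/install-spark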
Output from stderr.txt:

+ python install-spark-script BA
14/10/31 22:00:45 INFO guice.EmrFSBaseModule: Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as FileSystem implementation.
14/10/31 22:00:46 INFO fs.EmrFileSystem: Using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
14/10/31 22:00:47 INFO s3n.S3NativeFileSystem: Opening 's3://support.elasticmapreduce/spark/1.1.0/scala-2.10.3.tgz' for reading
14/10/31 22:01:06 INFO guice.EmrFSBaseModule: Consistency disabled, using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as FileSystem implementation.
14/10/31 22:01:07 INFO fs.EmrFileSystem: Using com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem as filesystem implementation
14/10/31 22:01:08 INFO s3n.S3NativeFileSystem: Opening 's3://support.elasticmapreduce/spark/1.1.0/spark-1.1.0.e.tgz' for reading
/bin/cp: cannot stat /usr/share/aws/emr/emr-fs/lib/: No such file or directory
Traceback (most recent call last):
  File "install-spark-script", line 120, in <module>
    prepare_classpath()
  File "install-spark-script", line 52, in prepare_classpath
    subprocess.check_call()
  File "/usr/lib64/python2.6/subprocess.py", line 505, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '' returned non-zero exit status 1
Any help would be greatly appreciated.
Thanks!