I am trying to run my Spark program with spark-submit on a YARN cluster. The program reads an external config file that is stored in HDFS. I am running the job as:
./spark-submit --class com.sample.samplepack.AnalyticsBatch --master yarn-cluster --num-executors 3 --driver-memory 512m --executor-memory 512m --executor-cores 1 --driver-java-options "-Dext.properties.dir=hdfs://namenode:8020/tmp/some.conf" PocSpark-1.0-SNAPSHOT-job.jar 10
But it is unable to read the file from HDFS. I have also tried running the job in local mode with the conf file given as an HDFS path, and I get:
java.io.FileNotFoundException: hdfs:/namenode:8020/tmp/some.conf (No such file or directory)
Note that one of the forward slashes after the hdfs protocol is missing in the error message. Any help will be appreciated.
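The missing slash is consistent with the path being opened through `java.io.File` (or a `FileInputStream`), which treats the string as a local path and normalizes it, collapsing the `//` after `hdfs:`. A minimal sketch that reproduces the symptom (the URI and class name here are just illustrative):

```java
import java.io.File;

public class HdfsPathDemo {
    public static void main(String[] args) {
        // java.io.File knows nothing about the hdfs:// scheme; it treats the
        // whole string as a local filesystem path and normalizes it, which
        // collapses the double slash after "hdfs:" on Unix-like systems.
        File f = new File("hdfs://namenode:8020/tmp/some.conf");
        System.out.println(f.getPath()); // hdfs:/namenode:8020/tmp/some.conf
    }
}
```

If this is what the config-loading code does, it will always fail for HDFS paths, since local file I/O cannot reach HDFS at all; one common fix is to open the file through Hadoop's `FileSystem` API instead, or to ship it to the containers with spark-submit's `--files` option and read it as a local file.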
Comments:
`hadoop fs -ls /tmp/` - Nikita
Is `HADOOP_CONF_DIR` set? Type `echo $HADOOP_CONF_DIR` in the console to check. - Nikita