I'm trying to minimize changes to my code, so I'm wondering if there is a way to submit a Spark Streaming job from my personal PC/VM as follows:
spark-submit --class path.to.your.Class --master yarn --deploy-mode client \
[options] <app jar> [app options]
without using the GCP SDK.
I also have to specify a directory of Hadoop configuration files via HADOOP_CONF_DIR, which I was able to download from Ambari.
Is there a way to do the same?
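For reference, the intended invocation might look like the sketch below. This assumes the cluster's YARN ResourceManager ports are reachable from the submitting machine (firewall rules permitting) and that the config bundle downloaded from Ambari was unpacked locally; the directory path, class name, and jar name are placeholders.

```shell
# Point Spark at the downloaded cluster configuration
# (core-site.xml, yarn-site.xml, hdfs-site.xml, etc.).
export HADOOP_CONF_DIR="$PWD/hadoop-conf"
export YARN_CONF_DIR="$HADOOP_CONF_DIR"

# Submit in client mode against the cluster's YARN,
# exactly as in the command above.
spark-submit \
  --class path.to.your.Class \
  --master yarn \
  --deploy-mode client \
  app.jar arg1 arg2
```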
Thank you
dataproc jobs submit or compute ssh -c could be used. Why do you not want to use the cloud sdk? – tix

--deploy-mode client (I will fix my example) to be able to run an external script. – Alex

HADOOP CONFIGURATION FILES – Alex
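For comparison, the two Cloud SDK routes mentioned in the comments look roughly like this; the cluster name, region, zone, and bucket path are placeholders, and both commands require gcloud installed locally.

```shell
# Option 1: submit through the Dataproc jobs API.
gcloud dataproc jobs submit spark \
  --cluster=my-cluster \
  --region=us-central1 \
  --class=path.to.your.Class \
  --jars=gs://my-bucket/app.jar \
  -- arg1 arg2

# Option 2: SSH to the master node and run spark-submit there.
gcloud compute ssh my-cluster-m \
  --zone=us-central1-a \
  --command='spark-submit --class path.to.your.Class --master yarn app.jar'
```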