I have set up a Spark Standalone cluster on Kubernetes, and I am trying to connect to a Kerberized Hadoop cluster which is NOT on Kubernetes. I have placed core-site.xml and hdfs-site.xml in my Spark cluster's container and have set HADOOP_CONF_DIR accordingly. I am able to successfully generate the Kerberos credential cache in the Spark container for the principal that accesses the Hadoop cluster. But when I run spark-submit, it fails with the access control exception below in the worker. Note: the master and workers are running in separate Kubernetes pods.
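For reference, the setup inside the Spark container looks roughly like this (the principal, realm, and paths below are placeholders, not my actual values):

export HADOOP_CONF_DIR=/etc/hadoop/conf   # contains core-site.xml and hdfs-site.xml
kinit -kt /etc/security/user.keytab myuser@EXAMPLE.COM   # generates the credential cache
klist   # shows a valid TGT in the default cache, e.g. /tmp/krb5cc_<uid>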
spark-submit --master spark://master-svc:7077 --class myMainClass myApp.jar
org.apache.hadoop.security.AccessControlException: Client cannot authenticate via: [TOKEN, KERBEROS]
However, when I run spark-submit from the Spark container in local mode, it is able to talk to the Hadoop cluster successfully.
spark-submit --master local[*] --class myMainClass myApp.jar
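To rule out the cache itself, these sanity checks can be run from the same container before submitting (assuming the Hadoop CLI is available in the image; the NameNode address is a placeholder):

klist
hdfs dfs -ls hdfs://namenode.example.com:8020/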
My guess is that the worker pods do not have access to the ticket cache that was generated in the driver container, since everything works in local mode, where the driver and executors share the container that holds the cache. Is there any configuration I need to set to make the workers use the credential cache in Spark Standalone mode?
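For example, I don't know whether something along these lines is the right direction; the KRB5CCNAME idea is just a guess on my part, although spark.executorEnv.* itself is a documented Spark config for setting environment variables on executors:

spark-submit --master spark://master-svc:7077 --conf spark.executorEnv.KRB5CCNAME=FILE:/tmp/krb5cc_spark --class myMainClass myApp.jar

This assumes a valid ticket cache already exists at that path inside each worker pod, which is itself part of what I'm unsure how to arrange.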