Well, using the Docker image I am able to access Hive resources when using the pyspark shell with --driver-class-path:
$ pyspark --driver-class-path /etc/spark2/conf:/etc/hive/conf
Python 3.7.4 (default, Aug 13 2019, 20:35:49)
Using Python version 3.7.4 (default, Aug 13 2019 20:35:49)
SparkSession available as 'spark'.
>>> from pyspark.sql import SparkSession
>>>
>>> #declaration
... appName = "test_hive_minimal"
>>> master = "yarn"
>>>
... sc = SparkSession.builder \
... .appName(appName) \
... .master(master) \
... .enableHiveSupport() \
... .config("spark.hadoop.hive.enforce.bucketing", "True") \
... .config("spark.hadoop.hive.support.quoted.identifiers", "none") \
... .config("hive.exec.dynamic.partition", "True") \
... .config("hive.exec.dynamic.partition.mode", "nonstrict") \
... .getOrCreate()
>>> sql = "show tables in user_tables"
>>> df_new = sc.sql(sql)
20/08/20 15:08:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
>>> df_new.show()
+-----------+--------------------+-----------+
| database| tableName|isTemporary|
+-----------+--------------------+-----------+
|user_tables| dummyt| false|
|user_tables|abcdefg...dummytable| false|
but I am facing the error below when running the same script through spark-submit:
spark-submit --master local --deploy-mode cluster --name test_hive --executor-memory 2g --num-executors 1 test_hive_minimal.py --verbose
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/opt/conda/lib/python3.7/site-packages/pyspark/sql/session.py", line 767, in sql
return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
File "/opt/conda/lib/python3.7/site-packages/pyspark/python/lib/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
File "/opt/conda/lib/python3.7/site-packages/pyspark/sql/utils.py", line 71, in deco
raise AnalysisException(s.split(': ', 1)[1], stackTrace)
pyspark.sql.utils.AnalysisException: "Database 'user_tables' not found;"
test_hive_minimal.py is a simple script that checks the Hive DB:
from pyspark.sql import SparkSession
appName = "test_hive_minimal"
master = "yarn"
# Creating Spark session
sc = SparkSession.builder \
    .appName(appName) \
    .master(master) \
    .enableHiveSupport() \
    .config("spark.hadoop.hive.enforce.bucketing", "True") \
    .config("spark.hadoop.hive.support.quoted.identifiers", "none") \
    .config("hive.exec.dynamic.partition", "True") \
    .config("hive.exec.dynamic.partition.mode", "nonstrict") \
    .getOrCreate()
sql = "show tables in user_tables"
df_new = sc.sql(sql)
df_new.show()
sc.stop()
I tried several approaches: passing hive.metastore.uris and spark.sql.warehouse.dir, as well as passing the XML config files via --files. Somehow the executors do not seem to be able to access the configuration. Can anyone help with this?
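For reference, the spark-submit variants I tried looked roughly like the one below (the warehouse path is only an illustrative default, not necessarily what my cluster uses; the metastore URI is the one that shows up in the connection log further down):
# warehouse dir is just the stock default, shown only as an example
spark-submit --master yarn --deploy-mode cluster --name test_hive \
    --conf spark.hadoop.hive.metastore.uris=thrift://cluster01.cdh.com:9083 \
    --conf spark.sql.warehouse.dir=/user/hive/warehouse \
    --files /etc/hive/conf/hive-site.xml \
    test_hive_minimal.py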
UPDATE: I was able to pass hive-site.xml via --files to spark-submit in cluster mode, and the log shows it no longer creates a local Derby metastore. However, I am now facing another issue, shown below:
20/08/21 09:59:29 INFO state.StateStoreCoordinatorRef: Registered StateStoreCoordinator endpoint
20/08/21 09:59:31 INFO hive.HiveUtils: Initializing HiveMetastoreConnection version 1.2.1 using Spark classes.
20/08/21 09:59:31 INFO hive.metastore: Trying to connect to metastore with URI thrift://cluster01.cdh.com:9083
20/08/21 09:59:32 ERROR transport.TSaslTransport: SASL negotiation failure
javax.security.sasl.SaslException: GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
at com.sun.security.sasl.gsskerb.GssKrb5Client.evaluateChallenge(GssKrb5Client.java:211)
at org.apache.thrift.transport.TSaslClientTransport.handleSaslStartMessage(TSaslClientTransport.java:94)
at org.apache.thrift.transport.TSaslTransport.open(TSaslTransport.java:271)
at org.apache.thrift.transport.TSaslClientTransport.open(TSaslClientTransport.java:37)
This looks like a Kerberos issue, but I already have a valid Kerberos ticket and can access HDFS from the terminal, and also through spark-shell from the Docker container. What needs to be done here? Isn't this provisioned automatically by YARN when submitting on the cluster?
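In case cluster mode really does need the credentials shipped with the job rather than picked up from my local ticket cache, would I have to submit with an explicit principal and keytab, roughly like this (the principal and keytab path are placeholders, not my actual values)?
# principal and keytab below are placeholders
spark-submit --master yarn --deploy-mode cluster --name test_hive \
    --principal myuser@MY.REALM \
    --keytab /path/to/myuser.keytab \
    --files /etc/hive/conf/hive-site.xml \
    test_hive_minimal.py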