
I am trying to run a Spark job written in Java on a Spark cluster to load records as a DataFrame into a Hive table I created.

         df.write().mode("overwrite").insertInto("dbname.tablename");

Although the table and database exist in Hive, it throws the error below:

         org.apache.spark.sql.AnalysisException: Table or view not found: dbname.tablename, the database dbname doesn't exist.;

I also tried reading from an existing Hive table different from the one above, thinking there might have been an issue with my table creation. I also checked that my user has permission on the HDFS folder where Hive stores the data. It all looks fine, and I am not sure what the issue could be.

Please suggest.

Thanks

Why not use Structured Streaming if you are building a dataframe anyway? - OneCricketeer
Have you checked this using spark-shell, to see whether the table exists in the Spark environment? scala> spark.catalog.listTables().show(false) - Mahesh Gupta

1 Answer


I think it is searching for that table in Spark's own built-in catalog instead of the Hive metastore. If the SparkSession is created without Hive support, Spark cannot see databases and tables that exist only in Hive, which matches the "database dbname doesn't exist" message. Make sure the session is built with enableHiveSupport() and that hive-site.xml is available on the classpath of the job.
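As a minimal sketch of what that looks like in Java (the warehouse path and the source table dbname.source_table are placeholders, not from the question; adjust them to your cluster):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SaveMode;
import org.apache.spark.sql.SparkSession;

public class HiveWriteExample {
    public static void main(String[] args) {
        // Without enableHiveSupport(), Spark uses its default in-memory
        // catalog, so tables that exist only in the Hive metastore are
        // not visible and insertInto fails with "Table or view not found".
        SparkSession spark = SparkSession.builder()
                .appName("HiveWriteExample")
                // Placeholder warehouse location; usually picked up
                // automatically from hive-site.xml on the classpath.
                .config("spark.sql.warehouse.dir", "/user/hive/warehouse")
                .enableHiveSupport()
                .getOrCreate();

        // Placeholder source; in the original question df is built elsewhere.
        Dataset<Row> df = spark.table("dbname.source_table");

        // The table name must be passed as a string literal.
        df.write().mode(SaveMode.Overwrite).insertInto("dbname.tablename");

        spark.stop();
    }
}
```

With Hive support enabled, spark.catalog.listTables("dbname") (as suggested in the comments) should list the target table before the write is attempted.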