1 vote

I have Spark jobs on EMR, and EMR is configured to use the Glue catalog for Hive and Spark metadata.

I create Hive external tables, and they appear in the Glue catalog, and my Spark jobs can reference them in Spark SQL like spark.sql("select * from hive_table ...")

Now, when I try to run the same code in a Glue job, it fails with a "table not found" error. It looks like Glue jobs do not use the Glue catalog for Spark SQL the same way Spark SQL running on EMR does.

I can work around this by using Glue APIs and registering dataframes as temp views:

create_dynamic_frame_from_catalog(...).toDF().createOrReplaceTempView(...)

but is there a way to do this automatically?
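
For reference, the full workaround looks something like this (the database and table names here are hypothetical):

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glueContext = GlueContext(SparkContext.getOrCreate())
    spark = glueContext.spark_session

    # Read the catalog table as a DynamicFrame, convert it to a DataFrame,
    # and register it as a temp view so Spark SQL can see it.
    dyf = glueContext.create_dynamic_frame_from_catalog(
        database="my_db", table_name="hive_table")
    dyf.toDF().createOrReplaceTempView("hive_table")

    spark.sql("select * from hive_table limit 10").show()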

How did you create the Spark object? Did you enable enableHiveSupport() on it? – Prabhakar Reddy
glueContext = GlueContext(SparkContext.getOrCreate()) and then spark = glueContext.spark_session – wrschneider
Are you trying to run your Glue jobs in EMR? – Prabhakar Reddy
No, the opposite: trying to run code that would have worked on EMR in a Glue job. – wrschneider

3 Answers

3 votes

This was a much-awaited feature request (using the Glue Data Catalog with Glue ETL jobs) that was released recently. When you create a new job, you'll find the following option:

Use Glue data catalog as the Hive metastore

You can also enable it for an existing job by editing the job and adding --enable-glue-datacatalog to the job parameters, with no value provided.
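
With that parameter set, a sketch of what the original Spark SQL usage should look like in a Glue job, with tables resolved straight from the catalog (database and table names are hypothetical):

    from awsglue.context import GlueContext
    from pyspark.context import SparkContext

    glueContext = GlueContext(SparkContext.getOrCreate())
    spark = glueContext.spark_session

    # With --enable-glue-datacatalog, Spark SQL uses the Glue Data Catalog
    # as its metastore, so no temp-view registration is needed.
    spark.sql("select * from my_db.hive_table limit 10").show()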

1 vote

Instead of using SparkContext.getOrCreate(), you should use SparkSession.builder.enableHiveSupport().getOrCreate(); enableHiveSupport() is the important part that's missing. What's probably happening is that your Spark job is not actually creating your tables in Glue; rather, it's creating them in Spark's embedded Hive metastore, since you have not enabled Hive support.
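
A minimal sketch of that in PySpark (note that in Python, builder is a property, so it takes no parentheses):

    from pyspark.sql import SparkSession

    # enableHiveSupport() points Spark SQL at the configured Hive metastore
    # (the Glue Data Catalog on EMR) instead of an embedded local metastore.
    spark = (SparkSession.builder
             .enableHiveSupport()
             .getOrCreate())

    spark.sql("select * from hive_table limit 10").show()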

0 votes

I had the same problem. It was working on my dev endpoint but not in the actual ETL job. It was fixed by editing the job to use Spark 2.4 instead of Spark 2.2.