I'm trying to use an AWS Glue ETL job to write my DynamicFrame directly to the Data Catalog and update my partitions.
I had code like this:
# First write: partitioned parquet files to S3 (does not touch the catalog).
datasink4 = glueContext.write_dynamic_frame.from_options(
    frame=dropnullfields3,
    connection_type="s3",
    connection_options={
        "path": TARGET_PATH,
        "partitionKeys": ["x", "y"]
    },
    format="parquet",
    transformation_ctx="datasink4")
# Second write: through the catalog sink, to update the table and partitions.
additionalOptions = {"enableUpdateCatalog": True}
additionalOptions["partitionKeys"] = ["x", "y"]
sink = glueContext.write_dynamic_frame_from_catalog(
    frame=dropnullfields3,
    database=DATABASE,
    table_name=TABLE,
    transformation_ctx="write_sink",
    additional_options=additionalOptions)
This worked and the data ended up in the catalog, but it writes everything twice. To avoid the double write, I followed method 2 from the documentation on updating the catalog from a job (https://docs.aws.amazon.com/glue/latest/dg/update-from-job.html) and came up with this code:
datasink4 = glueContext.write_dynamic_frame.from_options(
    frame=dropnullfields3,
    connection_type="s3",
    connection_options={
        "path": TARGET_PATH,
        "partitionKeys": ["x", "y"]
    },
    format="parquet",
    transformation_ctx="datasink4")
# getSink with enableUpdateCatalog: writes the files and updates the
# catalog (table and partitions) in one pass.
sink = glueContext.getSink(
    connection_type="s3",
    path=TARGET_PATH,
    enableUpdateCatalog=True,
    partitionKeys=["x", "y"])
sink.setFormat("glueparquet")
sink.setCatalogInfo(catalogDatabase=DATABASE, catalogTableName=TABLE)
sink.writeFrame(dropnullfields3)
But now the data can't be queried in Athena; I get strange errors about the data structure, like this:
HIVE_METASTORE_ERROR: com.facebook.presto.spi.PrestoException: Error: < expected at the end of 'struct' (Service: null; Status Code: 0; Error Code: null; Request ID: null)
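In case it helps, here is how the schema the sink is about to register can be inspected (a minimal sketch, reusing dropnullfields3 from the job above):

# Dump the DynamicFrame schema right before the write; any struct column
# shown here is a candidate for the type string Athena fails to parse.
dropnullfields3.printSchema()

# The Spark view of the same schema, which sometimes shows empty or
# oddly named nested fields more clearly.
dropnullfields3.toDF().printSchema()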
I have tried recreating the table so that it contains only the new glueparquet files.
I have also tried running a crawler on the new glueparquet files: the table generated by the crawler can be queried. However, when I fill the same table from the ETL job above, I always get this error...
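To narrow it down, I can diff the table definition the crawler produced against the one my ETL job writes (a sketch; CRAWLER_TABLE and ETL_TABLE are hypothetical placeholders for my two table names, and DATABASE is the same as in the job):

import boto3

glue = boto3.client("glue")

def column_types(database, table):
    # Fetch the column name -> type mapping from the Glue Data Catalog.
    columns = glue.get_table(DatabaseName=database,
                             Name=table)["Table"]["StorageDescriptor"]["Columns"]
    return {c["Name"]: c["Type"] for c in columns}

crawler_schema = column_types(DATABASE, CRAWLER_TABLE)
etl_schema = column_types(DATABASE, ETL_TABLE)

# Any column whose type string differs (for example a bare 'struct'
# instead of 'struct<...>') would be consistent with the Athena parse error.
for name in sorted(set(crawler_schema) | set(etl_schema)):
    if crawler_schema.get(name) != etl_schema.get(name):
        print(name, crawler_schema.get(name), "!=", etl_schema.get(name))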