
I have a number of Mapping Data flows that have been running regularly for the past several months, and some of them started failing yesterday.

The data flow pattern is:

Source: 2 Azure SQL DB tables and a lookup table in Synapse

Sink: 1 table in Synapse (Azure SQL DB)

We have enabled PolyBase staging for better performance, since each activity takes too long without it, and we use a linked service to an Azure Blob Storage account as the staging area.
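
For reference, the staging section of the Execute Data Flow activity in our pipeline JSON looks roughly like this (the data flow, linked service, and folder names are changed for this post):

{
    "name": "Run Data Flow",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "MyDataFlow",
            "type": "DataFlowReference"
        },
        "staging": {
            "linkedService": {
                "referenceName": "AzureBlobStorageLS",
                "type": "LinkedServiceReference"
            },
            "folderPath": "staging-container/dataflow-staging"
        }
    }
}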

Last night's run failed midway for some of our larger tables with the following error, but the smaller tables were all successful. Nothing has changed on any of these pipelines or linked services in several months.

In debug mode, I can't view the data preview for any of the Synapse sink activities unless I disable the 'Staging' option in settings. With staging enabled, I get the error "Blob storage staging properties should be specified", even though I have entered those properties in the debug settings.

The strange thing is that this problem only occurs on the data flows that move larger amounts of data; the smaller tables are fine in debug mode as well. All of these data flows succeeded two days ago, so could this be a space issue in Blob Storage?

The pipeline activity error code:

{"StatusCode":"DFExecutorUserError",
"Message":"Job failed due to reason: at Sink 'SinkIntoSynapse': 
java.sql.BatchUpdateException: There are no batches in the input script.",
"Details":"at Sink 'SinkIntoSynapse': 
java.sql.BatchUpdateException: There are no batches in the input script."}

1 Answer


I have seen this error caused by a commented-out SQL statement in the Pre-copy script section of the sink settings.

If you have anything in the Pre-copy script section, try removing it before publishing and running the data factory again.
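
For example, a pre-copy script like the sketch below, where the only statement is commented out, leaves SQL Server with nothing to execute, which matches the "There are no batches in the input script" message (the table name here is just an example):

-- Pre-copy script in which every statement is commented out.
-- The driver receives zero executable batches, hence the error
-- "There are no batches in the input script".
-- TRUNCATE TABLE dbo.LargeFactTable;

If you still need the statement, uncomment it rather than leaving the script comment-only.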