3 votes

I am inserting records into SQL Server using SqlBulkCopy and FastMember.

Locally I can insert 100k records in about 2 seconds. When I run this from an Azure Web App WebJob against an Azure SQL Database it takes more than 10 minutes and the transaction times out. The table definition is the same, there is a similar amount of data in the table, and so on. There are no locks; it's just slow. When I run it locally and write to the Azure SQL database, it also takes more than 10 minutes.

The actual call is as simple as it could be:

using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "Table" })
using (var reader = ObjectReader.Create(entities, columnList))
{
    await bulkCopy.WriteToServerAsync(reader).ConfigureAwait(false);
}

I've tried suppressing the ambient transaction with a TransactionScope using TransactionScopeOption.Suppress, but it made no difference.
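Roughly what that attempt looked like (just a sketch; the ambient scope is opened further up the call stack in the real code):

using (var suppressScope = new TransactionScope(
    TransactionScopeOption.Suppress,
    TransactionScopeAsyncFlowOption.Enabled))
using (var bulkCopy = new SqlBulkCopy(connection) { DestinationTableName = "Table" })
using (var reader = ObjectReader.Create(entities, columnList))
{
    // Suppress should stop SqlBulkCopy enlisting in whatever ambient transaction exists.
    await bulkCopy.WriteToServerAsync(reader).ConfigureAwait(false);
    suppressScope.Complete();
}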

Could anyone either point out the stupid mistake I've made or give me some hints on how to diagnose this? It's getting really frustrating! The difference in time is so big that I'm sure I've missed something fundamental here.

What is your database tier? Also, which part of the DTU (log/CPU/IO/memory) is maxed out during this? - TheGameiswar
S3/P1 (similar results), and the DTU is not maxed in either case. It spikes to about 70% briefly, then sits around 10-20%. - Carl
Then you must be having waits. Can you gather wait stats during this period? - TheGameiswar

2 Answers

0 votes

You can run the query below to verify whether a high log-write percentage exists while that workload is running:

SELECT 
    (COUNT(end_time) - SUM(CASE WHEN avg_cpu_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'CPU Fit Percent'
    ,(COUNT(end_time) - SUM(CASE WHEN avg_log_write_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Log Write Fit Percent'
    ,(COUNT(end_time) - SUM(CASE WHEN avg_data_io_percent > 80 THEN 1 ELSE 0 END) * 1.0) / COUNT(end_time) AS 'Physical Data Read Fit Percent'
FROM sys.dm_db_resource_stats

You can also collect the waits associated with that workload using one of the methods explained in this article. You will probably see high IO-related waits during execution of that workload.
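As a minimal sketch (assuming you just want the top accumulated waits for this database), you can snapshot the database-scoped wait stats before and after the bulk insert and compare the two:

-- Top accumulated waits for this database (sys.dm_db_wait_stats is the
-- database-scoped DMV available in Azure SQL Database)
SELECT TOP (20)
    wait_type,
    waiting_tasks_count,
    wait_time_ms,
    signal_wait_time_ms
FROM sys.dm_db_wait_stats
ORDER BY wait_time_ms DESC;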

To avoid throttling during high IO-intensive workloads, you can scale up before running them and scale back down to the initial tier after the workload finishes:

ALTER DATABASE [db1] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P15');
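And afterwards scale back down (a sketch; [db1] is a placeholder, and since the question mentions S3 that might be the tier to return to):

ALTER DATABASE [db1] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S3');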

0 votes

Well. I removed all the indexes, and it made some difference, but the batch was still timing out at 10 minutes. I then removed the outer ambient transaction scope (rather than suppressing it with TransactionScopeOption.Suppress) and all of a sudden the times look 'normal' again. It takes about 50 seconds to insert, and it gets much closer to maxing the DTU while it's running, whereas before it only got to about 20%.

I still have no idea why it works locally in 2 seconds with the ambient transaction, but that will have to be one to chalk up to experience.

Thanks to all who answered - and for at least pointing me in a good direction to learn!