4
votes

We are importing an Azure database from a bacpac in blob storage and it takes 2 days at 100% DTU. Sometimes I can accept a quick but expensive restore (but want to pay the additional cost only for the restore time); how can I do that?

If I change the tier to a higher level with 100 or 200 DTUs, will it speed up the restore? Also, what if I do it after the restore has started: will the change be applied, or do I have to set it initially when starting the import?

After the restore I will switch back to the desired S2 tier. A related subquestion: what are the potential risks of downgrading the database tier? At least I know about the need to disable geo-replication.

According to https://msdn.microsoft.com/en-us/library/azure/dn369872.aspx, it may take hours for performance to normalize after an upgrade, while a downgrade is usually applied almost immediately.

So is the ideal approach to start the restore at the maximum P6 tier and then downgrade to S2, with the downgrade being near-instant in most cases?

Particular case details: database size ~60 GB, bacpac ~5 GB, S2 Standard (50 DTUs). I saw 100% DTU for the whole restore time, and for the last day the import/export history has been stuck at "STATUS Running, Progress = 94.81%" while the database size slowly increases from 30 to 60 GB. DTU and database size chart:

select * from sys.dm_db_resource_stats gives, for example, this:

avg_cpu_percent=42.96
avg_data_io_percent=37.08
avg_log_write_percent=91.65
avg_memory_usage_percent=83.67
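
Those numbers suggest the log write rate, not CPU or data IO, is the limiting resource. To confirm that over a longer window rather than a single sample, a minimal aggregate over the same DMV could look like this (the DMV keeps roughly one hour of 15-second samples; the column names below are the standard ones, the rest is just a sketch):

-- Average and peak resource usage over the last hour of the import
SELECT AVG(avg_cpu_percent)        AS avg_cpu,
       AVG(avg_data_io_percent)   AS avg_data_io,
       AVG(avg_log_write_percent) AS avg_log_write,
       MAX(avg_log_write_percent) AS max_log_write
FROM sys.dm_db_resource_stats
WHERE end_time > DATEADD(HOUR, -1, GETUTCDATE());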
It would be great to understand why you are using Import/Export when you have built-in backup restore. If you can leverage the built-in backup restore functionality, your restores should be much faster than imports. – elfisher

We have two Azure accounts (one for production, one for development) and have to move databases between them. Exporting to a bacpac, moving it between blobs server-side using asynchronous cross-account blob copy, and then importing is the right way, as far as I know. – Danila Polevshikov

Did you ever get a resolution on this? Did switching tiers impact the restore time? – lucuma

Yes, a better tier decreases restore time. – Danila Polevshikov

I'm running a large import right now and the max DTU usage I'm getting is 40%; in your experience, can one not expect 100%? – tbone

2 Answers

3
votes

Based on the scenario you have described, it would be better to use the database copy T-SQL to do a cross-server database copy. I'm assuming that when you export, you first create a database copy to guarantee that you end up with a transactionally consistent BACPAC, as suggested by this article. If that is the case, you will save time by directly copying the database to the destination account and server. Cross-server copy works as long as the source server and target server have the same SQL login information.

Once you have confirmed the two servers share the same login credentials, all you need to do is log in to the master database of the server you want to copy the database to and run the following T-SQL:

CREATE DATABASE [NEW_DATABASE_NAME] AS COPY OF [SOURCE_SERVER_NAME].[SOURCE_DATABASE_NAME]

This should save you significant time, because the export method requires reading out all of the data and schema of the database, then copying that file (moving the data a second time), and then reading all of the data into a new database (moving the data a third time).
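While the copy is in flight, you can check its progress from the master database of the destination server. A minimal query, assuming the new database name used above (sys.dm_database_copies reports percent_complete while the copy is running):

-- Run against master on the destination server during the copy
SELECT d.name, dc.start_date, dc.percent_complete, dc.error_desc
FROM sys.dm_database_copies AS dc
JOIN sys.databases AS d ON d.database_id = dc.database_id
WHERE d.name = 'NEW_DATABASE_NAME';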

Having said all of this, if you absolutely need to use the export and import service, exporting and importing at a higher service tier will be faster than at a lower one, because you have more resources to read the data out and write it back in.

Please reply to this if you want anything clarified.

2
votes

From what you've described, Azure SQL DB is performing as expected. The DTUs associated with your service tier and performance level directly impact the import time. You should upgrade to a higher performance level for the import and downgrade afterwards.
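If you prefer to script the scale-up and scale-down around the import rather than use the portal, the standard T-SQL for changing the service objective can be used. A rough sketch, where the database name and the Premium objective chosen are just example placeholders:

-- Before starting the import: scale up (run against the master database)
ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Premium', SERVICE_OBJECTIVE = 'P2');

-- After the import completes: scale back down to the desired tier
ALTER DATABASE [MyDatabase] MODIFY (EDITION = 'Standard', SERVICE_OBJECTIVE = 'S2');

Note that the tier change itself is asynchronous, so issuing it after the import has already started is possible; the import simply picks up the extra resources once the change completes.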

I hope this helps.