0
votes

I know there are posts around Azure SQL connection timeouts, but I haven't found one covering the following case.

I've been using Azure SQL (S3 tier). Normally DTU usage is very low, and there are no timeouts when apps connect to this DB.

The problem starts when we run batch jobs against the DB, such as updating a certain column's value for millions of rows. These batch jobs may take hours to complete. During this period, DTU usage hits the maximum and other apps fail with timeouts.

Are there guidelines on what should be done? Here are options I thought of.

  1. Upgrade to a higher tier. This option would likely work, but it's not attractive since DTU usage is normally very low.
  2. Increase timeouts for the apps that connect to the DB. Not sure this works, because the timeout would have to be very long.

If there were a way to allocate a certain portion of DTUs to the batch job (say 70%) and always keep some DTUs for the other apps, that would be ideal, but I don't think it's possible. Any suggestion would be appreciated!


2 Answers

0
votes

Increasing timeouts is rarely a true solution, and can often make things worse.

First, see if the operations performed by the batch job can be made more efficient by reviewing the query execution plans for missing or insufficient indexes, inefficient query logic, opportunities for caching, etc.

You could add a configuration setting and self-throttling logic to your batch job that allows you to control how many operations it may perform in a given timeframe, and then use that to determine what works best in your situation.
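A minimal sketch of such self-throttling, assuming the work can be split into batches of row IDs (the batch size, delay, and `update_batch` callback here are illustrative, not from the question):

```python
import time

def chunks(ids, batch_size):
    """Split a list of row IDs into fixed-size batches."""
    return [ids[i:i + batch_size] for i in range(0, len(ids), batch_size)]

def run_batch_job(ids, update_batch, batch_size=5000, delay_seconds=1.0):
    """Update rows in small batches, pausing between batches so
    other connections get a share of the DTUs."""
    for batch in chunks(ids, batch_size):
        update_batch(batch)        # e.g. one UPDATE ... WHERE Id IN (...) per batch
        time.sleep(delay_seconds)  # throttle: leave DTU headroom for other apps
```

Tune `batch_size` and `delay_seconds` while watching the DTU chart until the other apps stop timing out.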

Maybe an easier option would be to just add a step at the beginning of your batch job that temporarily scales the database up to a higher pricing tier when it starts, and then scales it back down when it finishes.

ALTER DATABASE MyDatabaseName MODIFY (SERVICE_OBJECTIVE = 'P4'); -- before the job
ALTER DATABASE MyDatabaseName MODIFY (SERVICE_OBJECTIVE = 'S3'); -- after the job
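One way to script that pattern from the job itself (a sketch; `execute_sql` stands in for whatever database driver you use, and the database name and tiers are placeholders):

```python
def scale_sql(database, service_objective):
    """Build the T-SQL statement that changes the database's service tier."""
    return (f"ALTER DATABASE {database} "
            f"MODIFY (SERVICE_OBJECTIVE = '{service_objective}')")

def run_with_temporary_scale_up(execute_sql, run_batch_job,
                                database="MyDatabaseName",
                                high_tier="P4", normal_tier="S3"):
    """Scale up, run the job, and always scale back down afterwards."""
    # Note: the scale operation is asynchronous; in practice you may need
    # to wait for it to complete before starting the heavy work.
    execute_sql(scale_sql(database, high_tier))
    try:
        run_batch_job()
    finally:
        # Scale back down even if the job fails.
        execute_sql(scale_sql(database, normal_tier))
```

Wrapping the scale-down in `finally` avoids being left on the expensive tier when the job errors out partway through.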
0
votes

You can upgrade to a higher tier before running the batch job and then downgrade back to S3 afterwards.