4
votes

I am using a stored procedure, taken from the documentation sample, to bulk insert documents.

The documents are inserted in batches every minute; the collection's performance level is S2. After about 15,000 documents have been inserted over a period of a few days, my stored procedure consistently gets blocked and I get the following exception:

Microsoft.Azure.Documents.ForbiddenException, message: {"Errors":["The script with id 'xxx' is blocked for execution because it has violated its allowed resource limit several times."]}

However, it's completely unclear what exactly I am violating - the number of requests? The total number of documents? Document size? Looking at the DocumentDB limitations and the description of performance levels, I can't figure out which limit I am hitting or what I can do about it. Nor do I get any warnings or alerts in the Azure portal...

What do I do in this situation? Go round-robin with multiple stored procedures ;) ?


2 Answers

1
votes

I was experiencing this too, so I came up with a workaround: whenever I get this out-of-resources response, I delete and recreate the stored procedure. I created a library that handles this automatically based on the response; it also backs off automatically on throttling responses and includes a number of other conveniences.
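The delete-and-recreate idea can be sketched roughly like this, assuming the classic Node.js `documentdb` SDK's `executeStoredProcedure`, `deleteStoredProcedure`, and `createStoredProcedure` methods (the `ForbiddenException` surfaces as HTTP status 403). The client is passed in as a parameter, and `executeWithRecreate` is my name for the helper, not part of any SDK:

```javascript
// On a 403 (blacklisted sproc), drop the stored procedure, recreate it from
// its definition, and retry the execution once. Callback-style to match the
// classic 'documentdb' SDK.
function executeWithRecreate(client, collectionLink, sprocDef, params, done) {
  var sprocLink = collectionLink + "/sprocs/" + sprocDef.id;
  client.executeStoredProcedure(sprocLink, params, function (err, result) {
    if (err && err.code === 403) {
      // Blacklisted: remove and recreate the sproc, then retry.
      client.deleteStoredProcedure(sprocLink, function () {
        client.createStoredProcedure(collectionLink, sprocDef, function () {
          client.executeStoredProcedure(sprocLink, params, done);
        });
      });
    } else {
      done(err, result);
    }
  });
}
```

A real version would also want to honor the retry-after delay on 429 (throttling) responses before retrying.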

See:

3
votes

Server-side scripts are resource governed in terms of CPU, memory, and IO to prevent abuse and avoid noisy-neighbor issues.

The Boolean flag returned from document CRUD operations is meant to signal when resources are exhausted and it's time for the script to exit gracefully.

In this case, the script properly yields to the document CRUD Boolean, so this appears to be a bug on DocumentDB's side; the script should never have been blacklisted. I'll cause some noise and get this fixed for you promptly.

Update (5/5/15): We believe we've tracked down the bug and will attempt to deploy a fix later this week. In the meantime, here are two workarounds (choose either):

1. Re-create the sproc on each run. This lets you completely side-step blacklisting.
2. For bulk-importing documents specifically, cap the number of documents created per run: <= 100 documents for an S1 collection and <= 1000 documents for an S3 collection. This should bring the script's resource consumption below the current blacklisting thresholds.

Update (6/4/15): We have deployed a fix; please ping me if you experience scripts getting blocked: askcosmosdb {at} microsoft.com