3
votes

I have a service that sends 10k PUT requests to S3 every second. S3 was able to handle that load for several minutes, but then started throwing SlowDown exceptions, which slows my service down to an unacceptable rate.

I have read this and implemented the suggested best practice. The following is the format of the prefix: bucket-name/[first four of UUID]-[YYYYmmddhhiiss]/[random UUID]/[random UUID].json. That method didn't work, though.
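
Roughly, the keys are generated like this (a simplified sketch, not the exact production code; the function name is just for illustration):

```python
import uuid
from datetime import datetime, timezone

def build_key() -> str:
    # [first four of UUID]-[YYYYmmddhhiiss]/[random UUID]/[random UUID].json
    first_four = str(uuid.uuid4())[:4]
    timestamp = datetime.now(timezone.utc).strftime("%Y%m%d%H%M%S")
    return f"{first_four}-{timestamp}/{uuid.uuid4()}/{uuid.uuid4()}.json"

# e.g. '3f2a-20240101120000/<uuid>/<uuid>.json'
```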

Any idea how to overcome this error? Thank you! P.S.: I have requested a PUT limit increase from AWS Support. They suggested the steps above, which didn't work.

1
10K PUT requests every second, continuously, would cost ~$129,600.00/month... so, what's your use case for such a large amount of traffic? – Michael - sqlbot
@michael-sqlbot The workflow is simple. Every request creates a PUT request to S3. The number of requests per second will be around 10k. We are going to move to a batch system so we can reduce the number of PUT requests. – Vincent acent
How many requests have you been able to successfully make, per second? How long does this process last? It obviously does not run 24 × 7 × 365, which is what "every second" implies. – Michael - sqlbot
The 10k RPS is the expected request rate in production. It might not be constantly 10k requests per second, but it will be around that. Re: "How many requests have you been able to successfully make" – that metric is very hard to track. I am going to find a way to track it. – Vincent acent

1 Answer

1
votes

S3 is distributed and partitions data by key prefix, so you need to make sure that you don't create hot spots. You can avoid this by ensuring that your object keys, particularly their leading characters, are truly random.

So move the [random UUID] to the first part of your object key. And if you're not generating truly random UUIDs (i.e. it sounds as though the first four characters are similar for each object), try reversing the UUID.
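
For example, a minimal sketch of what that could look like with boto3 (the bucket name, function name, and payload are placeholders, not from your code):

```python
import json
import uuid

import boto3

s3 = boto3.client("s3")

def put_event(payload: dict, bucket: str = "my-bucket") -> str:
    # Put a fully random UUID at the very start of the key so writes spread
    # evenly across S3's key space instead of piling onto one prefix.
    key = f"{uuid.uuid4()}/{uuid.uuid4()}.json"
    s3.put_object(Bucket=bucket, Key=key, Body=json.dumps(payload).encode("utf-8"))
    return key
```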

More tips can be found here.