2 votes

I am working on a project where the number of read/write requests increases with the size of the data. Now that we are testing with 50 GB of data, we are making a very high number of read/write requests to S3, and S3 is throwing a "Please reduce your request rate" error. Reducing the number of requests is not an option for us, so is there a way to use S3 more smartly and avoid this problem? Any help will be appreciated.

What would you say is your current rate of GET and PUT (and any other) requests per second? – Michael - sqlbot
This is a possible duplicate of stackoverflow.com/questions/52443839/…. – ingomueller.net

1 Answer

1 vote

You need to distribute the load across multiple S3 key prefixes.

Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/POST/DELETE and 5,500 GET requests per second per prefix in a bucket. There are no limits to the number of prefixes in a bucket. It is simple to increase your read or write performance exponentially. For example, if you create 10 prefixes in an Amazon S3 bucket to parallelize reads, you could scale your read performance to 55,000 read requests per second.
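
A minimal sketch of such prefix sharding in Python with boto3, assuming the key is hashed to pick a prefix deterministically; the bucket name, shard count, and helper names here are hypothetical, not something prescribed by S3:

    import hashlib
    import boto3

    s3 = boto3.client("s3")

    BUCKET = "my-bucket"   # hypothetical bucket name
    NUM_PREFIXES = 10      # 10 prefixes -> up to ~35,000 writes and ~55,000 reads per second

    def sharded_key(key):
        # Hash the key to a stable shard number so the same object
        # always lands under the same prefix for reads and writes.
        shard = int(hashlib.md5(key.encode("utf-8")).hexdigest(), 16) % NUM_PREFIXES
        return "shard-%02d/%s" % (shard, key)

    def put_object(key, body):
        s3.put_object(Bucket=BUCKET, Key=sharded_key(key), Body=body)

    def get_object(key):
        return s3.get_object(Bucket=BUCKET, Key=sharded_key(key))["Body"].read()

Because the shard is derived deterministically from the key, readers and writers agree on the prefix without any shared state, and the request load spreads evenly across shard-00/ through shard-09/.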

Check the Amazon S3 performance guidelines ("Best practices design patterns: optimizing Amazon S3 performance") in the AWS documentation.