I'm having an issue with DynamoDB where read throughput is well below the provisioned capacity, with no visible throttling in the graphs.
My table has 100GB of data shaped like this:

| Partition Key | Sort Key | Value |
|---------------|----------|-------|
| A             | A1       | 1     |
| A             | A2       | 21    |
| A             | A3       | 231   |
| ...           | ...      | ...   |
| A             | A200     | 31    |
| B             | B1       | 5     |
This structure can't change much, because I need to query all values associated with a given partition key, as well as run more complex queries based on the sort key within a partition (a sketch of the access pattern is below). This layout caused throttled writes, presumably because it keeps hitting the same partitions, but what is really strange is the read throughput. The table has 1000 read units provisioned, yet the maximum recorded throughput is 600 reads per second, and this ceiling stays the same even with up to 10,000 provisioned read units per second.
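For context, the access pattern looks roughly like the following. This is a minimal boto3 sketch; the table name, attribute names, and key values are placeholders, not my real schema:

```python
import boto3
from boto3.dynamodb.conditions import Key

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("MyTable")  # placeholder table name

# Fetch all items under a single partition key.
response = table.query(
    KeyConditionExpression=Key("PartitionKey").eq("A")
)
items = response["Items"]

# A more complex query: restrict to a sort-key range within the partition.
response = table.query(
    KeyConditionExpression=Key("PartitionKey").eq("A")
    & Key("SortKey").between("A100", "A150")
)
```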
On the client side, I'm sending 1000 requests per second (paced uniformly with a rate limiter), so in theory the read throughput should be 1000 reads per second. Even when the request rate is increased on the client side, the observed rate stays the same, and there are zero throttled reads.
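Roughly, the client loop is the following (again a sketch under assumptions: boto3, a thread pool to keep requests in flight, and a simple sleep-based pacer standing in for my real rate limiter):

```python
import time
from concurrent.futures import ThreadPoolExecutor

import boto3
from botocore.config import Config

TARGET_RPS = 1000
POOL_SIZE = 50  # enough concurrency to sustain the target rate

# Low-level client (thread-safe, unlike boto3 resources); widen the
# connection pool so workers don't queue behind the default of 10.
dynamo = boto3.client("dynamodb", config=Config(max_pool_connections=POOL_SIZE))

def read_one():
    # Single-item read; table and key values are placeholders.
    dynamo.get_item(
        TableName="MyTable",
        Key={"PartitionKey": {"S": "A"}, "SortKey": {"S": "A1"}},
    )

pool = ThreadPoolExecutor(max_workers=POOL_SIZE)
interval = 1.0 / TARGET_RPS
next_slot = time.monotonic()
while True:
    # Pace dispatch uniformly at TARGET_RPS; the pool does the blocking I/O.
    now = time.monotonic()
    if now < next_slot:
        time.sleep(next_slot - now)
    next_slot += interval
    pool.submit(read_one)
```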
The client is running on an EC2 m4.2xlarge instance in the same region as the DynamoDB table. I've ruled out a client-side bottleneck: CPU usage is fairly low and there is plenty of memory available.
Any thoughts on what could be causing this?