Provisioned write throughput is distributed evenly across all partitions of your table. The number of partitions depends on the size and provisioned throughput of your table:
( readCapacityUnits / 3,000 ) + ( writeCapacityUnits / 1,000 ) = initialPartitions (rounded up)
A partition is created for every 10 GB of data as well.
See Understand Partition Behavior.
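The two rules above (throughput-based and size-based partition counts) can be sketched as a small calculation. This is a minimal illustration of the formula quoted above, not an AWS API; the function names are made up for this example:

```python
import math

def initial_partitions(read_capacity_units: int, write_capacity_units: int) -> int:
    """Initial partition count from provisioned throughput:
    (RCU / 3,000) + (WCU / 1,000), rounded up."""
    return math.ceil(read_capacity_units / 3000 + write_capacity_units / 1000)

def partitions_for_size(table_size_gb: float) -> int:
    """A partition is also created for roughly every 10 GB of data."""
    return math.ceil(table_size_gb / 10)

# Example: 7,500 RCU and 2,500 WCU -> ceil(2.5 + 2.5) = 5 partitions,
# so each partition gets 2,500 / 5 = 500 WCU of the write throughput.
```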
If your write requests are not distributed across multiple partition keys, you will see throttled requests before you reach the table's provisioned throughput.
In your specific case, your table consists of at least 5 partitions, which means you can use at most 1,000 write capacity units per second per partition.
The second thing to consider is the size of your items. Each write request consumes 1 write capacity unit per 1 KB of item size, rounded up.
See Write Capacity Units.
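The per-request cost above is a simple rounding calculation. A minimal sketch (the function name is hypothetical, and 1 KB is taken as 1,024 bytes here):

```python
import math

def write_capacity_units_consumed(item_size_bytes: int) -> int:
    """A standard write consumes 1 WCU per 1 KB of item size, rounded up.
    So a 500-byte item costs 1 WCU, while a 1.5 KB item costs 2 WCU."""
    return math.ceil(item_size_bytes / 1024)
```

This is why two tables with the same provisioned write capacity can sustain very different request rates: a table full of 3 KB items gets one third of the requests per second that a table of sub-1 KB items does.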
In summary: you can only make use of 100% of your provisioned throughput if your read and write requests hit all partitions in parallel. To do so, you need to distribute your workload across multiple different partition keys.
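One common way to do that is write sharding: append a random suffix to a hot partition key so writes spread across partitions, and fan out over all suffixes on read. A minimal sketch, assuming a made-up base key and shard count (tune `NUM_SHARDS` to your target throughput):

```python
import random

NUM_SHARDS = 10  # assumption: chosen for this example

def sharded_partition_key(base_key: str) -> str:
    """Spread writes for one logical key across NUM_SHARDS physical keys
    by appending a random suffix, e.g. 'order-2024#7'."""
    return f"{base_key}#{random.randrange(NUM_SHARDS)}"

def all_shard_keys(base_key: str) -> list[str]:
    """On read, query every suffix and merge the results client-side."""
    return [f"{base_key}#{i}" for i in range(NUM_SHARDS)]
```

The trade-off is that reads now need one Query per suffix, so this pattern fits write-heavy, read-light access to hot keys.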
For example, if you Query a global secondary index and exceed its provisioned read capacity, your request will be throttled. If you perform heavy write activity on the table, but a global secondary index on that table has insufficient write capacity, the write activity on the table will be throttled. (quoted from docs.aws.amazon.com/amazondynamodb/latest/developerguide/… - matias elgart)