I was going through the AWS blog and, from there, the AWS re:Invent video to understand DynamoDB's adaptive capacity and burst behavior. I understand the concept of WCUs and RCUs, the idea that the burst bucket accumulates up to 300 seconds' worth of unused provisioned capacity, and that the per-partition maximums are 1,000 WCU and 3,000 RCU.
Starting at the 23:00 mark of the video:
(1) The video gives an example of a table provisioned with 500 WCU and 1,500 RCU, stored on a single partition. After 5 minutes of assumed inactivity, the burst bucket grows to 500 × 300 = 150K WCU and 1,500 × 300 = 450K RCU. From there, the video concludes that with this full burst bucket the table can now offer up to 5 minutes of sustained 1K WCU or 3K RCU on one partition.
How do we arrive at this figure of "up to 5 minutes of sustained 1K WCU or 3K RCU"? By my understanding, the RCU calculation should be 450K / 3K = 150 seconds.
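To make the discrepancy concrete, here is the arithmetic as a short Python sketch. The second pair of prints is purely my own guess at an interpretation that would reproduce the 5-minute figure (namely, that provisioned throughput keeps serving traffic while you burst, so only the excess above it drains the bucket); the video doesn't spell this out:

```python
# One-partition case from the video: 500 WCU / 1,500 RCU provisioned.
bucket_wcu = 500 * 300      # 150,000 WCU accumulated over 300 s of inactivity
bucket_rcu = 1_500 * 300    # 450,000 RCU
peak_wcu, peak_rcu = 1_000, 3_000   # per-partition hard limits

# My calculation: the full peak rate drains the bucket.
print(bucket_wcu / peak_wcu)   # 150.0 s
print(bucket_rcu / peak_rcu)   # 150.0 s -- not 5 minutes

# Guess (not stated in the video): provisioned capacity keeps refilling
# while bursting, so the bucket only drains at the rate above it.
print(bucket_wcu / (peak_wcu - 500))     # 300.0 s = 5 minutes
print(bucket_rcu / (peak_rcu - 1_500))   # 300.0 s = 5 minutes
```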
(2) Further down, the table splits into two partitions, with the provisioned WCU/RCU divided evenly between them at 250 WCU / 750 RCU per partition. With 2 partitions, the peak WCU doubles to 2K and the peak RCU doubles to 6K, and the video concludes that the same burst bucket now offers up to 100 seconds of sustained 2K WCU or 6K RCU with two partitions.
How do we arrive at this figure of "up to 100 seconds of sustained 2K WCU or 6K RCU"? By my understanding, the RCU calculation should be 450K / 6K = 75 seconds.
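Applying the same two interpretations to the two-partition case (again, the "drain only above provisioned" variant is only my assumption):

```python
# Two-partition case: peaks double; provisioned totals and bucket are unchanged.
bucket_wcu, bucket_rcu = 150_000, 450_000
peak_wcu, peak_rcu = 2_000, 6_000   # 2 partitions x (1,000 WCU / 3,000 RCU)

# My calculation: the full peak rate drains the bucket.
print(bucket_wcu / peak_wcu)   # 75.0 s
print(bucket_rcu / peak_rcu)   # 75.0 s -- my figure

# Same guess as above: only consumption above the total provisioned rate
# (500 WCU / 1,500 RCU) drains the bucket.
print(bucket_wcu / (peak_wcu - 500))     # 100.0 s -- the video's figure
print(bucket_rcu / (peak_rcu - 1_500))   # 100.0 s -- the video's figure
```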
Could any DynamoDB experts help derive these figures?