As far as I can tell, you have two "simple" options.
Option #1: Program that does a Scan
It is fairly simple to write a program that does a (parallel) Scan of your table and then writes the result to a CSV file. A no-bells-and-whistles version of this is about 100-150 lines of code in Python or Go.
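A minimal sketch of the sequential variant in Python with boto3; the table name and column list are placeholders, and a real table with varying attributes would need a smarter header strategy:

```python
import csv

import boto3

TABLE_NAME = "my-table"           # placeholder
FIELDS = ["pk", "sk", "payload"]  # placeholder column set

def scan_to_csv(table_name: str, out_path: str) -> None:
    table = boto3.resource("dynamodb").Table(table_name)
    with open(out_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        kwargs = {}
        # For a parallel scan, run N workers and pass Segment=i,
        # TotalSegments=N to table.scan() in each worker.
        while True:
            # Each Scan call returns up to 1 MB of items per page.
            page = table.scan(**kwargs)
            writer.writerows(page["Items"])
            if "LastEvaluatedKey" not in page:
                break  # scan finished
            kwargs["ExclusiveStartKey"] = page["LastEvaluatedKey"]

if __name__ == "__main__":
    scan_to_csv(TABLE_NAME, "export.csv")
```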
Advantages:
- Easy to develop
- Can easily be run multiple times, from local machines, CI/CD pipelines, or wherever.
Disadvantages:
- It will cost you a bit of money. Scanning the whole table consumes read units, and depending on how much data you are reading, this can get costly fast (see the back-of-envelope after this list).
- Depending on the amount of data, this can take a while.
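For a rough sense of scale (the per-request price here is an assumption; check current pricing for your region and capacity mode):

```python
# Back-of-envelope: an eventually consistent read consumes 0.5 read units
# per 4 KB, i.e. one read unit per 8 KB scanned.
table_size_gb = 100                # assumed table size
read_units = table_size_gb * 1024**3 / (8 * 1024)
usd_per_million_reads = 0.25       # assumed on-demand price, region-dependent
print(f"{read_units / 1e6:.1f}M read units, ~${read_units / 1e6 * usd_per_million_reads:.2f}")
# -> 13.1M read units, ~$3.28
```

On provisioned tables the dollar amount matters less than the fact that a fast scan can eat the read capacity your application needs.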
Note: If you want to run this in a Lambda, remember that Lambdas can run for a maximum of 15 minutes. So once you have more data than can be processed within those 15 minutes, you probably need to switch to Step Functions.
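A sketch of what the Lambda side of such a Step Functions loop could look like; the state machine would feed each invocation's "cursor" output back into the next event (the table name and field names are made up):

```python
import boto3

dynamodb = boto3.client("dynamodb")

def handler(event, context):
    kwargs = {"TableName": "my-table"}  # placeholder
    if event.get("cursor"):
        kwargs["ExclusiveStartKey"] = event["cursor"]
    page = dynamodb.scan(**kwargs)
    # Convert page["Items"] to CSV rows here and append them somewhere
    # durable, e.g. one object per page in S3.
    cursor = page.get("LastEvaluatedKey")
    # The low-level client returns the key as plain DynamoDB JSON, so it can
    # be passed through the state machine output unchanged. A Choice state
    # loops back into this function until "done" is true.
    return {"cursor": cursor, "done": cursor is None}
```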
Option #2: Process an S3 backup
DynamoDB allows you to create backups of your table to S3 (as the article you linked describes; under the hood this is the "export to S3" feature). Those exports will be either in DynamoDB JSON or in Amazon Ion, a JSON-like AWS format. You can then write a program that converts those files to CSV.
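The conversion itself could look roughly like this, assuming the export files are gzipped, newline-delimited DynamoDB JSON with each item wrapped under an "Item" key (the file name and columns are placeholders):

```python
import csv
import gzip
import json

from boto3.dynamodb.types import TypeDeserializer

FIELDS = ["pk", "sk", "payload"]  # placeholder column set
deserializer = TypeDeserializer()

def export_file_to_csv(in_path: str, out_path: str) -> None:
    with gzip.open(in_path, "rt") as src, open(out_path, "w", newline="") as dst:
        writer = csv.DictWriter(dst, fieldnames=FIELDS, extrasaction="ignore")
        writer.writeheader()
        for line in src:
            typed = json.loads(line)["Item"]
            # TypeDeserializer turns typed values like {"S": "foo"} into
            # plain Python values.
            writer.writerow({k: deserializer.deserialize(v) for k, v in typed.items()})

if __name__ == "__main__":
    export_file_to_csv("part-000.json.gz", "export.csv")
```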
Advantages:
- (A lot) cheaper than a scan
Disadvantages:
- Requires more "plumbing", because you first need to create the backup/export, then download it from S3 to wherever you want to process it, and so on (see the sketch after this list).
- Probably will take longer than option #1.
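The kickoff side of that plumbing might look like this with boto3. This uses DynamoDB's export-to-S3 API, which requires point-in-time recovery to be enabled on the table; the ARN and bucket name are placeholders:

```python
import boto3

client = boto3.client("dynamodb")

response = client.export_table_to_point_in_time(
    TableArn="arn:aws:dynamodb:eu-west-1:123456789012:table/my-table",
    S3Bucket="my-export-bucket",
    ExportFormat="DYNAMODB_JSON",
)
# The export runs asynchronously; poll until it reports COMPLETED.
export_arn = response["ExportDescription"]["ExportArn"]
status = client.describe_export(ExportArn=export_arn)["ExportDescription"]["ExportStatus"]
print(status)
```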