Create 11 packages.
If it's truly a truncate and reload, that's extract and load; no transformation is required. The amount of data in the data flow buffer (in memory) stays approximately constant for the whole run; the engine does not allocate enough memory to hold all of the source data at once. It extracts a batch of rows from the source, applies whatever transformations are specified (none in your case), and pushes the batch to the destination. If there's more data to be had, the same allocated memory is reused for the subsequent batches.
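To make the constant-memory behavior concrete, here is a minimal Python sketch of the same batch pattern outside of SSIS, assuming pyodbc and hypothetical DSNs, column names, and table names; the Data Flow Task does all of this for you internally.

```python
import pyodbc

BATCH_SIZE = 10_000  # analogous to the data flow buffer size

src = pyodbc.connect("DSN=SourceDB")       # hypothetical source DSN
dst = pyodbc.connect("DSN=DestinationDB")  # hypothetical destination DSN
read_cur = src.cursor()
write_cur = dst.cursor()
write_cur.fast_executemany = True  # bulk-style parameter binding in pyodbc

write_cur.execute("TRUNCATE TABLE dbo.TargetTable")  # hypothetical table
read_cur.execute("SELECT Col1, Col2 FROM dbo.SourceTable")

while True:
    rows = read_cur.fetchmany(BATCH_SIZE)  # only one batch is ever in memory
    if not rows:
        break
    write_cur.executemany(
        "INSERT INTO dbo.TargetTable (Col1, Col2) VALUES (?, ?)", rows
    )
dst.commit()
```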
Your orchestrator pattern then allows you to structure the execution however makes sense. If your concern is minimizing memory, you likely want sequential execution.
You could run all the packages in sequence: steady memory usage, but a longer total duration.
You could run all the packages in parallel: a shorter total duration, but a much higher peak memory usage.
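As an illustration of the two extremes, here is a hedged sketch that drives the packages from Python via dtexec's /F (file system) switch; the package paths and the run_package helper are hypothetical.

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

packages = [f"LoadTable{n:02d}.dtsx" for n in range(1, 12)]  # the 11 packages

def run_package(path: str) -> int:
    """Invoke one package with dtexec and return its exit code."""
    return subprocess.run(["dtexec", "/F", path]).returncode

# Option 1: pure sequence - steady memory, longest wall-clock time.
for pkg in packages:
    run_package(pkg)

# Option 2: full parallel - shortest wall-clock time, highest memory peak.
with ThreadPoolExecutor(max_workers=len(packages)) as pool:
    list(pool.map(run_package, packages))
```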
My personal preference is a combination of parallel and sequential execution. I have some number of Foreach Loop containers (3, 5, whatever) that all run in parallel. Each one enumerates a segment of the work that I have partitioned beforehand: typically, a few big tables each get a processing stream to themselves, while lots of little tables share another. The goal is for each parallel stream to carry a roughly equivalent total workload.
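A sketch of that hybrid approach, reusing the hypothetical run_package helper from above and made-up (table, estimated-row-count) pairs to balance the streams; any real implementation would pull the sizes from metadata instead.

```python
from concurrent.futures import ThreadPoolExecutor

tables = [("BigTable1", 50_000_000), ("BigTable2", 40_000_000),
          ("Small1", 10_000), ("Small2", 8_000), ("Small3", 5_000)]

STREAMS = 3  # analogous to three parallel Foreach Loop containers

# Greedy balancing: biggest tables first, each assigned to the lightest stream.
streams = [[] for _ in range(STREAMS)]
loads = [0] * STREAMS
for name, rows in sorted(tables, key=lambda t: t[1], reverse=True):
    i = loads.index(min(loads))
    streams[i].append(name)
    loads[i] += rows

def run_stream(names):
    # Within a stream the loads run in sequence, like a Foreach Loop container.
    for name in names:
        run_package(f"Load{name}.dtsx")

# The streams themselves run in parallel.
with ThreadPoolExecutor(max_workers=STREAMS) as pool:
    list(pool.map(run_stream, streams))
```

With that segmentation, the big tables don't serialize behind each other and the small tables don't tie up a whole stream, which is exactly the "roughly equivalent workload per stream" goal.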