I guess this comes down to whether you actually need to do all this work. The way I work with SSDT is to define what I want the current version to look like in the schema + code, and any upgrade scripts I need go into the post-deploy files as re-runnable (idempotent) scripts.
Whether a db is on version 1 or 100, it gets the same post-deploy script, but the script either checks for the data itself or checks a flag stored in the database saying that particular script has already been run - that should be pretty easy to set up but depends on your exact requirements.
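To make the flag approach concrete, here is a sketch of what one re-runnable block inside the post-deploy script could look like. The `dbo.ScriptsRun` tracking table, the script name, and the `dbo.Orders`/`Status` data change are all hypothetical - substitute your own:

```sql
-- Sketch of one idempotent block in Script.PostDeployment.sql.
-- dbo.ScriptsRun is a hypothetical tracking table; create it first
-- so the existence check below always has something to query.
IF OBJECT_ID(N'dbo.ScriptsRun', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.ScriptsRun
    (
        ScriptName sysname NOT NULL PRIMARY KEY,
        RunDate datetime2 NOT NULL DEFAULT (SYSUTCDATETIME())
    );
END

IF NOT EXISTS (SELECT 1 FROM dbo.ScriptsRun WHERE ScriptName = N'Backfill-Order-Status')
BEGIN
    -- the one-off data change, e.g. backfilling a newly added column
    UPDATE dbo.Orders SET Status = N'Unknown' WHERE Status IS NULL;

    -- record that this script has run so it is skipped next deploy
    INSERT INTO dbo.ScriptsRun (ScriptName) VALUES (N'Backfill-Order-Status');
END
GO
```

Because every block is guarded like this, the same post-deploy file is safe to run against a database on any version.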
To keep this manageable it is a good idea to track when particular scripts have gone to all of your environments and then remove them, so you are only left with the post-deploy scripts you still need.
That being said, there are sometimes specific restrictions, which are normally:
- Customers managing databases so you can't be sure what version they have
- Regulatory (perceived or otherwise) requirements demanding upgrade/rollback scripts
Firstly, find out how set in stone your requirements are. The goal of SSDT is that you shouldn't have to worry about upgrade/downgrade scripts (certainly for the schema) - the questions I would ask are:
- is it enough to take a backup or snapshot before the deploy?
- can you leave off downgrade scripts and write them should you ever need to?
- how often are the rollback scripts actually used? (knowing the answers to the first two can help here)
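On the first question, a pre-deploy backup or snapshot is often all the "rollback script" you need. A minimal sketch, with example database and file names - a database snapshot gives a fast rollback point (Enterprise edition), while a COPY_ONLY backup works on any edition without disturbing the backup chain:

```sql
-- Option 1: database snapshot taken just before the deploy
-- (NAME must match the source database's logical data file name)
CREATE DATABASE MyDb_PreDeploy
ON (NAME = N'MyDb', FILENAME = N'C:\Snapshots\MyDb_PreDeploy.ss')
AS SNAPSHOT OF MyDb;

-- Option 2: a copy-only backup, which works everywhere
BACKUP DATABASE MyDb
TO DISK = N'C:\Backups\MyDb_PreDeploy.bak'
WITH COPY_ONLY;

-- Rolling back, should you ever need to:
-- RESTORE DATABASE MyDb FROM DATABASE_SNAPSHOT = N'MyDb_PreDeploy';
```

If this is acceptable to whoever owns the requirement, you can often drop hand-written downgrade scripts entirely.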
If you have a good suite of tests and an automated deployment pipeline (even if it has a manual DBA step in it at some point), then these issues become less important, and over time, as everyone learns to trust the process, it becomes significantly faster and easier to deploy changes.
What are your restrictions?
Ed