1 vote

Two SQL scripts (update/rollback) are created for each version in our current projects. We would like to migrate to a DACPAC solution.

Each DACPAC project produces one .dacpac file at the end, so for each version I create two projects (one for the update and one for the rollback). The schema changes live in the DACPAC itself, while the pre- and post-deployment scripts handle data migration.

To add a new version, I copy the current update project into a new update project and a new rollback project, then modify from there.

Any thoughts please?


2 Answers

3 votes

I guess this comes down to whether you actually need to do all this work. The way I work with SSDT is to define what I want the current version to look like in the schema and code, and any upgrade scripts I need go into the post-deploy files as re-runnable (idempotent) scripts.

Whether a database is on version 1 or version 100, it gets the same post-deploy script, but the script either checks the data itself or checks a flag stored in the database that says a particular script has already run. That should be pretty easy to set up, but it depends on your exact requirements.
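As a minimal sketch of that pattern, the post-deploy script below guards a one-off data migration behind a log table. The table and script names (dbo.ScriptLog, V2_BackfillStatus, dbo.Orders) are hypothetical, purely for illustration:

```sql
-- Hypothetical flag table recording which one-off scripts have already run.
IF OBJECT_ID(N'dbo.ScriptLog', N'U') IS NULL
BEGIN
    CREATE TABLE dbo.ScriptLog
    (
        ScriptName sysname      NOT NULL PRIMARY KEY,
        RunDate    datetime2(0) NOT NULL DEFAULT SYSUTCDATETIME()
    );
END;

-- Idempotent guard: the migration only runs if it has not been logged yet,
-- so the same post-deploy file is safe to run against any database version.
IF NOT EXISTS (SELECT 1 FROM dbo.ScriptLog
               WHERE ScriptName = N'V2_BackfillStatus')
BEGIN
    -- Example one-off data migration for this version.
    UPDATE dbo.Orders SET Status = N'Unknown' WHERE Status IS NULL;

    INSERT INTO dbo.ScriptLog (ScriptName) VALUES (N'V2_BackfillStatus');
END;
```

The same post-deploy file then accumulates one guarded block per migration, and each block silently no-ops on databases where it has already run.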

To keep this manageable, it is a good idea to track when particular scripts have reached all of your environments and then remove them, so you only keep the post-deploy scripts you will actually still need.

That being said, there are sometimes specific restrictions which are normally:

  • Customers managing databases so you can't be sure what version they have
  • Regulatory (perceived or otherwise) requirements demanding upgrade/rollback scripts

First, find out how set in stone your requirements are. The goal of SSDT is that you don't have to worry about upgrade/downgrade scripts (certainly for the schema). The questions I would ask are:

  • is it enough to take a backup or snapshot before the deploy?
  • can you leave off downgrade scripts and write them should you ever need to?
  • how often are the rollback scripts actually used? (knowing the answers to the first two can help here)

If you have a good suite of tests and an automated deployment pipeline (even if it has a manual DBA step at some point), then these issues become less important, and over time, as everyone learns to trust the process, deploying changes becomes significantly faster and easier.

What are your restrictions?

Ed

2 votes

If you find that you're investing a fair amount of effort putting logic into a post-deployment script, the chances are that a migrations-based approach (rather than the state-based approach) is better suited to you.

Examples are DbUp (open source) and ReadyRoll (commercial, and the one we develop here at Redgate; it has additional features such as auto-generation of scripts, integration with VS, etc.).

Migrations-based tools manage the versions (including the table Ed is referring to) on your behalf.
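For a sense of what that managed version table looks like, DbUp's default journal is a dbo.SchemaVersions table along these lines (column types are a sketch; the tool creates and maintains it for you):

```sql
-- Journal table a migrations tool maintains on your behalf:
-- one row per script that has already been applied to this database.
CREATE TABLE dbo.SchemaVersions
(
    Id         int IDENTITY(1,1) NOT NULL PRIMARY KEY,
    ScriptName nvarchar(255)     NOT NULL,  -- name of the migration script
    Applied    datetime          NOT NULL   -- when it was run
);
```

On each deployment the tool compares your ordered script folder against this table and executes only the scripts that do not yet have a row, so you never write the "has this already run?" checks yourself.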