
Working on a project in which high-volume data will be moved from SQL Server (running on an Azure VM) to Azure Table Storage, for scalability and cheaper storage. There are several foreign keys within the data being moved to Table Storage; these are GUID primary keys in the SQL tables. Obviously, there is no built-in means to ensure referential integrity, as transactions do not span the two storage types. I would like to know if anyone has had success with this storage design. Are there any transaction management solutions that allow transactions spanning SQL Server and Azure Table Storage? What are the implications of queries that read from both stores (SQL and Table Storage)?


1 Answer


If you're trying to perform a transaction that spans both SQL Server and Azure Table Storage, your best bet is the eventually consistent transaction pattern.

In a nutshell, you put your updates into a queue message, then have a worker process (whether a WebJob, a Worker Role, or something running on your VM) dequeue the message with a peek-lock (with an Azure Storage queue, the visibility timeout plays this role), make sure all steps within the transaction are executed, and only then complete (delete) the message. Because a message can be redelivered after a failure, each step must be idempotent.
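A minimal sketch of this pattern in Python, assuming the azure-storage-queue, azure-data-tables, and pyodbc packages; the connection strings, the "pending-transactions" queue, the "Orders" table, the Migrated column, and the message shape are all hypothetical, not from the question:

```python
import json

import pyodbc
from azure.data.tables import TableClient, UpdateMode
from azure.storage.queue import QueueClient

# Hypothetical connection strings and names, for illustration only.
STORAGE_CONN = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
SQL_CONN = "Driver={ODBC Driver 18 for SQL Server};Server=myvm;Database=MyDb;..."

queue = QueueClient.from_connection_string(STORAGE_CONN, "pending-transactions")
table = TableClient.from_connection_string(STORAGE_CONN, "Orders")

def apply_steps(update: dict) -> None:
    # Every step must be idempotent, since the message may be redelivered.
    # Step 1: upsert into Table Storage (naturally idempotent).
    table.upsert_entity(update["entity"], mode=UpdateMode.REPLACE)
    # Step 2: flag the row in SQL Server; the WHERE clause makes a
    # redelivered message a no-op.
    with pyodbc.connect(SQL_CONN) as cn:
        cn.execute(
            "UPDATE Orders SET Migrated = 1 WHERE OrderId = ? AND Migrated = 0",
            update["entity"]["RowKey"],
        )

# Worker loop: the visibility timeout acts as the lock; if the process
# crashes before delete_message, the message reappears for another worker.
for msg in queue.receive_messages(visibility_timeout=300):
    apply_steps(json.loads(msg.content))
    queue.delete_message(msg)  # "complete" only after all steps succeeded
```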

If you're looking to perform a transaction over Azure Table Storage alone, you can do so with an entity group transaction (batch operation), as long as all entities in the batch live within the same partition (share the same PartitionKey); a batch is limited to 100 operations.
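For that single-partition case, a sketch using the azure-data-tables SDK (the connection string, table name, and entity values are made up):

```python
from azure.data.tables import TableClient

STORAGE_CONN = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=..."
table = TableClient.from_connection_string(STORAGE_CONN, "Orders")

# All operations in one batch must target the same PartitionKey.
operations = [
    ("upsert", {"PartitionKey": "customer-42", "RowKey": "order-1", "Total": 10.0}),
    ("upsert", {"PartitionKey": "customer-42", "RowKey": "order-2", "Total": 25.5}),
]
table.submit_transaction(operations)  # atomic: all operations succeed or none do
```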