1 vote

I have finally got the time to start looking at Azure. It looks good, and scaling seems easy.

Azure SQL, Table Storage and Blob Storage should cover most of my needs: fast access to data, automatic replication and failover to another datacenter.

Should I ever build an app that needs fast global access, Traffic Manager is there and can route users by "Failover" or "Performance".

The "performance" is very nice for Cloud Services and "Web Roles / Worker Roles" ... BUT ... What about access to data from SQL Azure/Table Storage/Blog Storage.

I have tried searching the web for what to do about this, but haven't found anything about Traffic Manager that mentions how to access data in such a scenario.

Have I missed anything?

Do people access the storage in the original data center (and if that fails use the Geo Replication feature)? Is that fast enough? Is internal traffic on the MS network free across datacenters?
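To make the question concrete, this is roughly the fallback I have in mind (just a sketch with the azure-storage-blob Python package; the account name, SAS token and blob names are placeholders, and it assumes a read-access geo-redundant storage account):

```python
# Sketch: read from the primary datacenter, fall back to the geo-replicated
# secondary endpoint if the primary is unreachable. All names are made up.
from azure.core.exceptions import AzureError
from azure.storage.blob import BlobClient

ACCOUNT = "mystorageaccount"   # hypothetical storage account name
SAS = "sv=..."                 # placeholder SAS token
PRIMARY = f"https://{ACCOUNT}.blob.core.windows.net"
SECONDARY = f"https://{ACCOUNT}-secondary.blob.core.windows.net"  # RA-GRS read-only endpoint

def read_blob(container: str, name: str) -> bytes:
    """Try the primary datacenter first, then the geo-replicated secondary."""
    for endpoint in (PRIMARY, SECONDARY):
        try:
            blob = BlobClient(account_url=endpoint, container_name=container,
                              blob_name=name, credential=SAS)
            return blob.download_blob().readall()
        except AzureError:
            continue   # endpoint unreachable, try the next one
    raise RuntimeError("blob unavailable in both datacenters")
```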

This seems like such a simple ...

Haven't seen too many complaints in their forum about performance. – AD.Net
I'm not really worried yet :-). But do people access their storage in the "main" datacenter where the storage is placed? I have not found an answer to this, and since I haven't used it yet, I'm looking for best practice on this part. – Syska
Btw, internal network traffic is free. I'm not sure about the geo-replication feature, i.e. when else it is used other than as a backup. – AD.Net
I'm sure it's not free, since it's a feature you can turn off. But if internal traffic is free (which it might be), I guess accessing my main datacenter is the way to go then. – Syska
I meant that traffic when retrieving data from the database is free. – AD.Net

3 Answers

1 vote

Take a look at the guidance by Microsoft: Replicating, Distributing, and Synchronizing Data. You could use the Service Bus to keep data centers in sync. This can cover SQL databases, Storage, search indexes like Solr, Elasticsearch, ... The advantage over solutions like SQL Data Sync is that it's technology-independent and can keep virtually all of your data in sync.

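To make the pattern concrete, here is a minimal sketch of that idea using the azure-servicebus Python package (not taken from the guidance; the connection string, topic, subscription and apply_to_local_store are assumptions): every write in the primary datacenter publishes a change event to a topic, and a worker role in each other datacenter applies it to its local copy.

```python
# Sketch: fan out data changes over a Service Bus topic so secondary
# datacenters can keep their local SQL / storage / search index up to date.
import json
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "Endpoint=sb://..."            # hypothetical namespace connection string
TOPIC = "data-changes"
SUBSCRIPTION = "northeurope-replica"      # one subscription per secondary datacenter

def publish_change(entity: str, key: str, payload: dict) -> None:
    """Called right after a successful write in the primary datacenter."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        with client.get_topic_sender(topic_name=TOPIC) as sender:
            sender.send_messages(ServiceBusMessage(
                json.dumps({"entity": entity, "key": key, "payload": payload})))

def replicate_forever() -> None:
    """Worker role in a secondary datacenter: apply each change locally."""
    with ServiceBusClient.from_connection_string(CONN_STR) as client:
        receiver = client.get_subscription_receiver(
            topic_name=TOPIC, subscription_name=SUBSCRIPTION)
        with receiver:
            for msg in receiver:                   # blocks and yields messages
                change = json.loads(str(msg))
                apply_to_local_store(change)       # your own code: SQL, blobs, Solr, ...
                receiver.complete_message(msg)
```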

1 vote

In this episode of Channel 9, they state that Traffic Manager only works with Cloud Services as of now (Jan 2014), but support is coming for Azure Web Sites and other services. I agree that you should be able to request a blob using a single global URL and expect the content to be served from the closest datacenter.

0 votes

There isn't a one-click, easy-to-implement solution for this issue. How you solve it will depend on where the data lives (i.e. SQL Azure, Blob Storage, etc.) and on your access patterns.

  • Do you have a small number of data requests that are not on a performance-critical path in your code? Consider just using the main datacenter.
  • Do you have a large number of read-only requests? Consider replicating the data to another datacenter.
  • Do you do a large number of reads and only a few writes? Consider duplicating the data across all datacenters: each write goes to every datacenter at the same time (incurring a perf penalty), while all reads go to the local datacenter (fast reads); see the sketch after this list.
  • Is your data in SQL Azure? Consider using SQL Data Sync to keep multiple datacenters in sync.
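
As a rough sketch of the "write everywhere, read locally" option, assuming one Blob Storage account per datacenter (the account names, keys and helper names below are made up, and retries/error handling are left out):

```python
# Sketch: writes fan out to every datacenter, reads always hit the local one.
from azure.storage.blob import BlobServiceClient

ACCOUNTS = {
    "westeurope": "DefaultEndpointsProtocol=https;AccountName=appdatawest;AccountKey=...",
    "eastus": "DefaultEndpointsProtocol=https;AccountName=appdataeast;AccountKey=...",
}
LOCAL_DC = "westeurope"   # the datacenter this role instance runs in

_clients = {dc: BlobServiceClient.from_connection_string(cs)
            for dc, cs in ACCOUNTS.items()}

def write_everywhere(container: str, name: str, data: bytes) -> None:
    """Slow path: every write goes to all datacenters (the perf penalty)."""
    for client in _clients.values():
        client.get_blob_client(container, name).upload_blob(data, overwrite=True)

def read_local(container: str, name: str) -> bytes:
    """Fast path: reads are served from the local datacenter's copy."""
    blob = _clients[LOCAL_DC].get_blob_client(container, name)
    return blob.download_blob().readall()
```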