3 votes

We are currently in the process of migrating our servers to Windows Azure and we want to take advantage of Windows Azure Shared Caching.

We have written a provider for the caching so that we can switch it on and off (falling back to the runtime cache if needed), but we have found that the shared cache is around 1,000 times slower than the runtime cache.
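For reference, the kind of switchable provider described above can be sketched roughly like this. This is Java rather than .NET, and every name in it (`Cache`, `CacheProvider`, `AzureSharedCache`) is illustrative, not the actual Azure SDK API:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical cache abstraction; the method names are illustrative.
interface Cache {
    Object get(String key);
    void put(String key, Object value);
}

// In-process runtime cache backed by a concurrent map.
class RuntimeCache implements Cache {
    private final Map<String, Object> store = new ConcurrentHashMap<>();
    public Object get(String key) { return store.get(key); }
    public void put(String key, Object value) { store.put(key, value); }
}

// Stub standing in for a Shared Caching client; a real implementation
// would delegate to the Azure caching client instead of throwing.
class AzureSharedCache implements Cache {
    public Object get(String key) {
        throw new UnsupportedOperationException("wire up the Azure client here");
    }
    public void put(String key, Object value) {
        throw new UnsupportedOperationException("wire up the Azure client here");
    }
}

// Reads a config flag and hands back the matching backend, so the rest
// of the application never knows which cache it is talking to.
class CacheProvider {
    static Cache fromConfig(boolean useAzureCache) {
        return useAzureCache ? new AzureSharedCache() : new RuntimeCache();
    }
}
```

The point of the extra interface is that callers only ever see `Cache`, so flipping the config flag swaps the backend without touching the call sites.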

For example, returning a Website object from the runtime cache takes a total time of 0.0053, whereas Azure Shared Caching takes 73.6638, and in some cases 439.3367.
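Those numbers came from timing the cache reads directly. A generic sketch of that kind of measurement, with a plain map standing in for the runtime cache and a hypothetical `averageMillis` helper (none of this is Azure API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

class CacheTiming {
    // Returns the average milliseconds per lookup over `iterations` reads.
    static double averageMillis(Map<String, Object> cache, String key, int iterations) {
        long start = System.nanoTime();
        for (int i = 0; i < iterations; i++) {
            cache.get(key);
        }
        long elapsedNanos = System.nanoTime() - start;
        return (elapsedNanos / 1e6) / iterations;
    }

    public static void main(String[] args) {
        Map<String, Object> runtimeCache = new ConcurrentHashMap<>();
        runtimeCache.put("website", new Object());
        System.out.printf("runtime cache: %.4f ms per get%n",
                averageMillis(runtimeCache, "website", 100_000));
    }
}
```

Swapping the map for a remote-cache client in the same loop makes the network round trip show up immediately in the average.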

Don't get me wrong, I expect some network lag, but this is absurd. It's completely unusable.

The object is tiny; the total size of the cache is 0.8 MB, so the Website object is small.

Does anyone have any suggestions? Should I be using the Caching (Preview) feature I have seen, where the cache is semi-dedicated? Surely they wouldn't offer this shared cache if it were this unusable?

I have read about local caching working on top of the network caching, but won't there still be latency whenever the local cache rebuilds?
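The local-caching layer in question is essentially a time-limited in-process copy sitting in front of the network cache, so the network latency is only paid when an entry expires. A minimal sketch in Java (the `RemoteCache` interface is a hypothetical stand-in for the Shared Caching client, not a real API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Hypothetical stand-in for the network cache client.
interface RemoteCache {
    Object get(String key);
}

// Keeps entries locally for ttlMillis, then falls back to the remote
// cache, paying the network latency again on each rebuild.
class LocalCache {
    private static final class Entry {
        final Object value;
        final long expiresAt;
        Entry(Object value, long expiresAt) {
            this.value = value;
            this.expiresAt = expiresAt;
        }
    }

    private final Map<String, Entry> entries = new ConcurrentHashMap<>();
    private final RemoteCache remote;
    private final long ttlMillis;

    LocalCache(RemoteCache remote, long ttlMillis) {
        this.remote = remote;
        this.ttlMillis = ttlMillis;
    }

    Object get(String key) {
        Entry e = entries.get(key);
        if (e != null && e.expiresAt > System.currentTimeMillis()) {
            return e.value;               // local hit: no network round trip
        }
        Object value = remote.get(key);   // miss or expired: slow path
        entries.put(key, new Entry(value, System.currentTimeMillis() + ttlMillis));
        return value;
    }
}
```

This also shows why the rebuild latency concern is real: every expiry sends the next reader back across the network before the local copy is refreshed.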

Can anyone offer any advice?

The following conditions have been met:

- We are working with the latest version of the Azure DLLs
- We are testing with a website that is in the same data centre as the Azure cache
- SQL Azure is in the same data centre as the instance, so data retrieval is not an issue
- I am the only person accessing this website at present, so I don't believe concurrency is an issue

The provider is a provider for Azure; it's simply a mechanism for switching quickly between the Azure and runtime caches. We have customers who don't have access to the Azure cache, so we put in a config setting to switch it on and off selectively. - Chris Lomax
Sorry, I think you misunderstand. I am using Azure Shared Caching. I am not using Blob storage or Table storage in this example, but rather their version of AppFabric. I don't know if that helps? - Chris Lomax
Thanks, but I don't think I'm going to get anywhere with it, to be honest. I've resorted to setting up two dedicated instances for caching now. What's annoyed me is that I have memcache set up on two machines in our current configuration, and I wrote a class to implement AppFabric since they have the service available, and it looks like it's just rubbish! They have a semi-dedicated cache web role, but that is specific to the web and worker roles setup; I am using virtual machines and can't find documentation on connecting to it! The joys of working with tech still in preview status. Thanks anyway, Chris - Chris Lomax
I am just going to delete my comments as they are not going to be of any value to others. - paparazzo
For those interested in the outcome: I could not get Shared Caching running any quicker unless I switched on local caching too. That is a timeout cache, so it's reactive rather than proactive, and proactive caching is not available in Shared Caching. We ended up deploying two dedicated cache instances with full AppFabric caching on them. It actually works out cheaper per GB of storage, with no restrictions; you just have to set it up yourself. - Chris Lomax

2 Answers

0 votes

Apologies for the issues you faced. Windows Azure Shared Caching was released a while ago and is running in production. It should give you good latencies as long as you are deployed in the same data center as the cache you have provisioned. What was your topology: where was the cache, and where were you hitting it from?

As far as the preview caching goes, it would give you better latency and more control over your cache, since you are the one running it. Since your app was written for memcache, you may also be interested in using our memcache shim.

You can read about protocol support and the other steps here:

http://msdn.microsoft.com/en-us/library/windowsazure/hh914167.aspx

Regarding "I am using virtual machines and can't find documentation on connecting to it!", what were you trying to do there?

0 votes


Did you tick the "Enable Caching" checkbox in the role properties? Where in the stack did the client fail?

Regarding lag times, what values are you seeing on dedicated caching? It's so easy to set up that it would be criminal not to try :)

Storing smaller values is always a plus from a caching point of view: it not only saves on transport and storage overhead, it also saves on serialization and deserialization costs. Have you considered using a local cache on top of dedicated caching? If latency is your main pain, nothing usually beats a local cache.

If you are trying the memcache scenario, this might also be worth a look:

http://blogs.msdn.com/b/silverlining/archive/2012/10/09/using-memcache-to-access-a-windows-azure-dedicated-cache.aspx