1 vote

I'm using Redis Cache in an Azure website, with the cache itself hosted in Azure. Our monitoring was showing occasional timeouts when setting values in the cache, so I re-ran some load tests I had used before moving from the local server cache to Redis. The results were considerably worse than the earlier runs, mostly because of timeouts against the Redis cache.

I'm using the strong-named build of the StackExchange.Redis library, version 1.0.333.

I was careful not to create a new connection each time I access the cache.

The load test does not actually put the server under much load. Previously the results were 100% successful; now I get roughly a 50% error rate, all caused by timeouts.

Here is the code being used to access the cache:

using System;
using System.Configuration;
using System.IO;
using System.Linq;
using System.Runtime.Serialization.Formatters.Binary;
using StackExchange.Redis;

public static class RedisCacheProvider
{
    private static ConnectionMultiplexer connection;
    private static ConnectionMultiplexer Connection
    {
        get
        {
            if (connection == null || !connection.IsConnected)
            {
                connection = ConnectionMultiplexer.Connect(ConfigurationManager.ConnectionStrings["RedisCache"].ToString());
            }
            return connection;
        }
    }

    private static IDatabase Cache
    {
        get
        {
            return Connection.GetDatabase();
        }
    }


    public static T Get<T>(string key)
    {
        return Deserialize<T>(Cache.StringGet(key));
    }

    public static object Get(string key)
    {
        return Deserialize<object>(Cache.StringGet(key));
    }

    public static void Set(string key, object value)
    {
        Cache.StringSet(key, Serialize(value));
    }

    public static void Remove(string key)
    {
        Cache.KeyDelete(key);
    }

    public static void RemoveContains(string contains)
    {
        var endpoints = Connection.GetEndPoints();
        var server = Connection.GetServer(endpoints.First());
        var keys = server.Keys();
        foreach (var key in keys)
        {
            if (key.ToString().Contains(contains))
                Cache.KeyDelete(key);
        }
    }

    public static void RemoveAll()
    {
        var endpoints = Connection.GetEndPoints();
        var server = Connection.GetServer(endpoints.First());
        server.FlushAllDatabases();
    }

    static byte[] Serialize(object o)
    {
        if (o == null)
        {
            return null;
        }

        BinaryFormatter binaryFormatter = new BinaryFormatter();
        using (MemoryStream memoryStream = new MemoryStream())
        {
            binaryFormatter.Serialize(memoryStream, o);
            byte[] objectDataAsStream = memoryStream.ToArray();
            return objectDataAsStream;
        }
    }

    static T Deserialize<T>(byte[] stream)
    {
        if (stream == null)
        {
            return default(T);
        }

        BinaryFormatter binaryFormatter = new BinaryFormatter();
        using (MemoryStream memoryStream = new MemoryStream(stream))
        {
            T result = (T)binaryFormatter.Deserialize(memoryStream);
            return result;
        }
    }

}

2 Answers

3 votes

I have had the same issue recently.

A few points that will improve your situation:

Protobuf-net instead of BinaryFormatter

I recommend using protobuf-net as it will reduce the size of values that you want to store in your cache.

public interface ICacheDataSerializer
{
    byte[] Serialize(object o);
    T Deserialize<T>(byte[] stream);
}

using ProtoBuf;
using System.IO;

public class ProtobufNetSerializer : ICacheDataSerializer
{
    public byte[] Serialize(object o)
    {
        using (var memoryStream = new MemoryStream())
        {
            Serializer.Serialize(memoryStream, o);

            return memoryStream.ToArray();
        }
    }

    public T Deserialize<T>(byte[] stream)
    {
        // Dispose the stream once the value has been deserialized.
        using (var memoryStream = new MemoryStream(stream))
        {
            return Serializer.Deserialize<T>(memoryStream);
        }
    }
}
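Note that protobuf-net only serializes types it has a contract for, so the objects you cache need to be annotated (or registered with a RuntimeTypeModel). A minimal sketch, using a hypothetical CachedUser type:

// Hypothetical cached type; the attributes give protobuf-net its contract.
[ProtoContract]
public class CachedUser
{
    [ProtoMember(1)]
    public int Id { get; set; }

    [ProtoMember(2)]
    public string Name { get; set; }
}

// Usage (hypothetical):
// var serializer = new ProtobufNetSerializer();
// byte[] payload = serializer.Serialize(new CachedUser { Id = 1, Name = "test" });
// CachedUser user = serializer.Deserialize<CachedUser>(payload);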

Implement a retry strategy

Implement this RedisCacheTransientErrorDetectionStrategy to handle timeout issues.

using System;
using Microsoft.Practices.TransientFaultHandling;
using StackExchange.Redis;

public class RedisCacheTransientErrorDetectionStrategy : ITransientErrorDetectionStrategy
{
    /// <summary>
    /// Custom transient error detection strategy that treats the Redis exception types as transient.
    /// </summary>
    /// <param name="ex"></param>
    /// <returns></returns>
    public bool IsTransient(Exception ex)
    {
        if (ex == null) return false;

        if (ex is TimeoutException) return true;

        if (ex is RedisServerException) return true;

        if (ex is RedisException) return true;

        if (ex.InnerException != null)
        {
            return IsTransient(ex.InnerException);
        }

        return false;
    }
}

Instantiate like this:

private readonly RetryPolicy _retryPolicy;

// In the constructor, for example: retry up to 3 times, 2 seconds apart.
var retryStrategy = new FixedInterval(3, TimeSpan.FromSeconds(2));
_retryPolicy = new RetryPolicy<RedisCacheTransientErrorDetectionStrategy>(retryStrategy);

Use like this:

var cachedString = _retryPolicy.ExecuteAction(() => dataCache.StringGet(fullCacheKey));
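
The same wrapping applies to writes, which is where your timeouts were showing up (serializedValue here stands for whatever byte[] you pass to StringSet):

var ok = _retryPolicy.ExecuteAction(() => dataCache.StringSet(fullCacheKey, serializedValue));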

Also review your code to minimize the number of cache calls and the size of the values you store; I eliminated a lot of errors simply by storing values more efficiently. Where you currently fetch several keys one at a time, StackExchange.Redis can fetch them in a single round trip, as in the sketch below.
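A minimal sketch of such a batched read (the GetMany helper and its placement inside your RedisCacheProvider are assumptions, not part of your original code):

// Hypothetical helper inside RedisCacheProvider: fetch several cached values
// with one round trip (a single MGET) instead of one StringGet per key.
// Requires: using System.Collections.Generic; using System.Linq;
public static IDictionary<string, T> GetMany<T>(IEnumerable<string> keys)
{
    RedisKey[] keyArray = keys.Select(k => (RedisKey)k).ToArray();
    RedisValue[] values = Cache.StringGet(keyArray);

    var result = new Dictionary<string, T>();
    for (int i = 0; i < keyArray.Length; i++)
    {
        if (values[i].HasValue)
        {
            result[(string)keyArray[i]] = Deserialize<T>(values[i]);
        }
    }
    return result;
}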

If none of this helps, move to a higher cache tier (we ended up on C3 instead of C1).


1 vote

You should not create a new ConnectionMultiplexer if IsConnected is false. The existing multiplexer will reconnect in the background. By creating a new multiplexer and not disposing the old one, you are leaking connections. We recommend the following pattern:

private static Lazy<ConnectionMultiplexer> lazyConnection =
    new Lazy<ConnectionMultiplexer>(() => {
        return ConnectionMultiplexer.Connect(
            "mycache.redis.cache.windows.net,abortConnect=false,ssl=true,password=...");
    });

public static ConnectionMultiplexer Connection {
    get {
        return lazyConnection.Value;
    }
}
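
As a rough sketch, the Connection property in your RedisCacheProvider could be rewritten along these lines, keeping your existing "RedisCache" connection string (ideally with abortConnect=false in it so the multiplexer keeps retrying in the background):

public static class RedisCacheProvider
{
    // Created exactly once; StackExchange.Redis handles reconnects internally.
    private static readonly Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() =>
            ConnectionMultiplexer.Connect(
                ConfigurationManager.ConnectionStrings["RedisCache"].ToString()));

    private static ConnectionMultiplexer Connection
    {
        get { return lazyConnection.Value; }
    }

    private static IDatabase Cache
    {
        get { return Connection.GetDatabase(); }
    }

    // ... Get/Set/Remove methods unchanged ...
}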

You can monitor the number of connections to your cache in the Azure portal. If it seems unusually high, this may be what is impacting your performance.

For further assistance, please contact us at '[email protected]'.