
Two issues

  1. Do lua scripts really solve all cases for redis transactions?
  2. What are best practices for asynchronous transactions from one client?

Let me explain the first issue.

Redis transactions are limited: we cannot UNWATCH specific keys, all keys are unwatched upon EXEC, and we are limited to a single ongoing transaction on a given client connection.

I've seen threads where many redis users claim that lua scripts are all they need. Even the official redis docs state that transactions may eventually be removed in favour of lua scripts. However, there are cases where this is insufficient, including the most standard one: using redis as a cache.

Let's say we want to cache some data from a persistent data store, in redis. Here's a quick process:

  1. Check cache -> miss
  2. Load data from database
  3. Store in redis

However, what if, between step 2 (loading data), and step 3 (storing in redis) the data is updated by another client?

The data stored in redis would be stale. So... we use a redis transaction, right? We WATCH the key before loading from the db, and if the key is updated somewhere else before we store, the EXEC fails. Great! However, an atomic lua script cannot load data from an external database, so lua cannot be used here. Hopefully I'm simply missing something, or there is something wrong with our process.
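The check-and-set behaviour we are relying on can be sketched with a minimal in-memory stand-in for redis (purely illustrative — `FakeRedis`, its version counters, and all names here are made up for the sketch, not a real client API):

```javascript
// Minimal in-memory stand-in for redis optimistic locking (WATCH/MULTI/EXEC).
// Illustrative only; not a real redis client.
class FakeRedis {
  constructor() {
    this.data = new Map();     // key -> value
    this.versions = new Map(); // key -> write counter, used to detect changes
  }
  get(key) { return this.data.get(key); }
  set(key, value) {
    this.data.set(key, value);
    this.versions.set(key, (this.versions.get(key) || 0) + 1);
  }
  // WATCH: remember the key's current version.
  watch(key) { return this.versions.get(key) || 0; }
  // EXEC: apply the write only if the key is unchanged since watch().
  execSet(key, value, watchedVersion) {
    if ((this.versions.get(key) || 0) !== watchedVersion) return false; // aborted
    this.set(key, value);
    return true;
  }
}

const redis = new FakeRedis();

// The cache-aside race: watch, "load from db", another client writes, exec fails.
const watched = redis.watch("user:1");
const fromDb = "old-row";            // step 2: load from the database
redis.set("user:1", "updated-row");  // another client updates the key meanwhile
const stored = redis.execSet("user:1", fromDb, watched);
console.log(stored);                 // false: the stale write was rejected
```

This is exactly the guarantee we want from WATCH, and exactly what a lua script cannot give us, since the "load from database" step happens outside redis.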

Moving on to the second issue (asynchronous transactions).

Let's say we have a socket.io cluster which processes various messages and requests for a game, for high-speed communication between server and client. This cluster is written in node.js with appropriate use of promises and asynchronous concepts.

Say two requests hit a server in our cluster, each requiring data to be loaded and cached in redis. Using our transaction from above, multiple keys could be WATCHed, and multiple MULTI/EXEC transactions would overlap on one redis connection. Once the first EXEC runs, all watched keys are unwatched, even if the other transaction is still in progress. This may allow the second transaction to succeed when it should have failed.

These overlaps could happen in totally separate requests happening on the same server, or even sometimes in the same request if multiple data types need to load at the same time.
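The hazard can be demonstrated with the same kind of in-memory stand-in, this time tracking watched keys per connection the way redis does (again purely illustrative — `FakeConnection` and its methods are invented for the sketch, not a real client API):

```javascript
// In-memory sketch of how EXEC clears ALL watched keys on a connection,
// which breaks a second, overlapping transaction. Illustrative only.
class FakeConnection {
  constructor(store) {
    this.store = store;       // shared Map: key -> version counter
    this.watched = new Map(); // key -> version seen at WATCH time
  }
  watch(key) { this.watched.set(key, this.store.get(key) || 0); }
  write(key) { this.store.set(key, (this.store.get(key) || 0) + 1); }
  // EXEC succeeds only if every watched key is unchanged, then (like redis)
  // it unwatches EVERYTHING on this connection.
  exec() {
    const ok = [...this.watched].every(([k, v]) => (this.store.get(k) || 0) === v);
    this.watched.clear();
    return ok;
  }
}

const store = new Map();

// One shared connection, two overlapping logical transactions:
const shared = new FakeConnection(store);
shared.watch("a");          // transaction 1 watches "a"
shared.watch("b");          // transaction 2 watches "b"
const t1 = shared.exec();   // transaction 1 commits — and unwatches "b" too
shared.write("b");          // another client changes "b" after transaction 2's WATCH
const t2 = shared.exec();   // transaction 2 commits although "b" changed!
console.log(t1, t2);        // true true — the second should have failed

// One connection per transaction keeps the watches independent:
const c1 = new FakeConnection(store);
const c2 = new FakeConnection(store);
c1.watch("a");
c2.watch("b");
c1.write("b");              // "b" changes again
const ok1 = c1.exec();
const ok2 = c2.exec();
console.log(ok1, ok2);      // true false — correctly aborted
```

The per-connection version shows why giving each transaction its own connection restores the guarantee.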

What is best practice here? Do we need to create a separate redis connection for every individual transaction? It seems like we would lose a lot of speed, and we would see many connections created from just one server if this is the case.

As an alternative we could use redlock / mutex locking instead of redis transactions, but this is slow by comparison.

Any help appreciated!


1 Answer


I received the following after my query was escalated to redis engineers:

Hi Jeremy,

Your method using multiple backend connections would be the expected way to handle the problem. We do not see anything wrong with multiple backend connections, each using an optimistic Redis transaction (WATCH/MULTI/EXEC) - there is no chance that the “second transaction will succeed where it should have failed”.

Using LUA is not a good fit for this problem.

Best Regards, The Redis Labs Team