12
votes

(this is a follow up to https://github.com/apollographql/apollo-client/issues/1886)

I'm trying to build a text input that will update the value as the user types.

First attempt

I first tried to use the optimisticResponse to update the local cache as the user types. This works, except that it fires off a mutation on every keystroke. Aside from flooding the network with requests, there is also the problem of network inconsistency: it's possible for the last mutation to arrive first and the first mutation to arrive last, which leaves the server with a stale value. Here is an example of this race condition:

type: a
mutate request: a
type: b
mutate request: ab
arrives on server: ab
arrives on server: a

Now the server has recorded "a" in GraphQL, which is incorrect.

Adding debounce

To alleviate this, I added a debounce to the keypress event. Although this helps with the race condition described above, it doesn't solve it: you can still hit the race if the network round trip is slower than your debounce threshold.
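For reference, the debounce I mean here is the standard trailing-edge kind (in practice you'd just use lodash's debounce; the injectable timer parameters below are only there to make the behaviour easy to demonstrate):

```typescript
// Trailing-edge debounce: of a burst of calls, only the last one fires,
// `wait` ms after the burst stops. setTimer/clearTimer default to the real
// timers; they are parameters purely for illustration and testing.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  wait: number,
  setTimer: (cb: () => void, ms: number) => unknown = setTimeout,
  clearTimer: (handle: unknown) => void = clearTimeout as (handle: unknown) => void,
): (...args: A) => void {
  let handle: unknown;
  return (...args: A) => {
    if (handle !== undefined) clearTimer(handle); // cancel the pending call
    handle = setTimer(() => fn(...args), wait);   // schedule only the latest one
  };
}
```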

Because we're now debouncing the text input, we need to introduce local state in that React component so it updates immediately as the user types (as @jbaxleyiii suggested in the GitHub issue). Now our state lives in two places (the component state and the Apollo cache).

A big problem with this is that the component won't update when it receives new props, e.g. when the GraphQL data gets updated and pushed to the client.

Adding network queue

Because debounce doesn't actually solve the race condition, I added a network queue (in addition to the debounce) that manages the mutation requests and makes sure there is only ever one mutation in flight at a time. If it receives a mutation request while one is in flight, it queues the new request to be fired when the first one comes back. If there is already a mutation queued, it discards it and replaces it with the new one (there can only ever be one item in the queue at a time). Here's an example:

type: a
send mutate request: a
type: b
queues mutate request: ab     <<  wait to send this until "a" comes back
type: c
replaces queued request: abc  << discard the queued request for "ab", it's old now
response from server: a
send mutate request: abc      << send the queued mutation and clear the queue
response from server: abc

This guarantees that we won't have a race condition (at least from this client...)
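Stripped down, the queue is just a "one in flight, at most one queued, latest wins" policy. A minimal sketch (the `Send` callback shape is mine; in practice it would wrap the mutate call and invoke `done` when the response returns):

```typescript
// `send` performs the actual work (e.g. wraps client.mutate) and calls
// `done` when the server responds.
type Send<T> = (value: T, done: () => void) => void;

class LatestWinsQueue<T> {
  private inFlight = false;
  private pending: T | undefined;

  constructor(private send: Send<T>) {}

  push(value: T): void {
    if (this.inFlight) {
      this.pending = value; // replace whatever was queued; it's stale now
      return;
    }
    this.fire(value);
  }

  private fire(value: T): void {
    this.inFlight = true;
    this.send(value, () => {
      this.inFlight = false;
      const next = this.pending;
      this.pending = undefined;
      if (next !== undefined) this.fire(next); // send the queued value, if any
    });
  }
}
```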

There is a problem with this approach though. The optimisticResponse is only applied when a mutation goes out. If a mutation is in flight, we need to wait for the network to return before the updated optimisticResponse gets applied, which could take a long time on slow networks. So, in the example above, we can't use optimisticResponse to update to "abc" until "send mutate request: abc".

This isn't a huge deal, just a delay, but it seems like something we should be able to do.

Attempt to update cache as the user types

In the docs, I learned that I can use withApollo to get access to the client and update the cache as the user types via writeQuery. This replaces the need for the optimisticResponse. However, a problem arises when an old response comes back and updates the cache out from under us. Here's an example:

action                   | cache
-------------------------+------------
type: a                  | a
mutate request: a        | a
type: b                  | ab
queues request: ab       | ab
response from server: a  | a  << oh no!
mutate request: ab       | a  << we're not using optimisticResponse anymore
... network time ...     | a
response from server: ab | ab

I suppose we can use client.writeQuery to update the cache as the user types and the optimisticResponse to update when a mutate request fires, but now the code is getting pretty hard to follow.
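For what it's worth, the combination would look something like this. `MiniClient`, the `Draft` shape, and the query/mutation documents are all made-up stand-ins for this sketch; `writeQuery` and `mutate` are the real Apollo client methods:

```typescript
// Two separate update paths: the cache write happens on every keystroke,
// the optimisticResponse only attaches when the queued mutation fires.
// MiniClient is a stand-in for the relevant slice of ApolloClient.
interface MiniClient {
  writeQuery(opts: { query: unknown; data: unknown }): void;
  mutate(opts: {
    mutation: unknown;
    variables: { text: string };
    optimisticResponse: unknown;
  }): void;
}

// Called on every keystroke: instant local feedback via the cache.
function onType(client: MiniClient, query: unknown, text: string): void {
  client.writeQuery({ query, data: { draft: { __typename: "Draft", text } } });
}

// Called when the queue releases a value: while this mutation is in
// flight, its optimistic layer sits on top of the cache, so a stale
// response writing underneath it shouldn't show through.
function sendQueued(client: MiniClient, mutation: unknown, text: string): void {
  client.mutate({
    mutation,
    variables: { text },
    optimisticResponse: {
      __typename: "Mutation",
      updateDraft: { __typename: "Draft", text },
    },
  });
}
```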

There also might be a way to deal with this in the update function, but I haven't gone that far.

Help?

I'm pretty new to Apollo, so maybe I'm missing something. Is there a better way to handle many rapid mutations in Apollo?


2 Answers

1
votes

Each mutation returns a promise, so if you keep track of the latest one, you can tell when an out-of-order mutation arrives.

Since you accept anything the user types, that means the user's value is the canonical one, and you don't really need the optimistic response. All you're doing is making sure the server has the same value as you do.

So I'd propose that you track the input in Redux and add a store listener that fires off a mutation (with debounce).

If you do need to keep track of the server value, use a counter to see if the mutation that returned was the last one (store the ++counter value at mutation send, and compare that value to the counter value on mutation return).
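A minimal sketch of that counter (the names are mine):

```typescript
// Tag each outgoing mutation with ++counter; when a response arrives, it is
// authoritative only if its tag still equals the current counter value.
class LatestTracker {
  private counter = 0;

  // Call when a mutation is sent; keep the returned id with the request.
  issue(): number {
    return ++this.counter;
  }

  // Call when a response arrives; false means a newer mutation has gone
  // out since, so this response should be ignored.
  isLatest(id: number): boolean {
    return id === this.counter;
  }
}
```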

0
votes

Found this thread in 2020. I tried to solve the issue with the add-debounce approach and it has worked pretty well so far.

As described in the original question, the main challenge is to keep the component state in sync with the Apollo cache, so I introduced a useEffect that updates the local state when the props change. The following snippet is a helper hook I created to edit a task that needs to sync with the server.

Please note that the mutation must be called with an optimisticResponse. When the mutation is triggered, a new task object is returned from the Apollo cache and replaces the local cachedTask. Later on, when the server response comes back, Apollo is smart enough to figure out that it is the same object as the optimistic response, so the local cache will not be updated again. Thanks to this mechanism, changes you make in the middle of a network transaction will not be overridden.

// Assumes lodash's debounce; useUpdateTaskMutation and the
// TaskFieldsDetailedFragment type come from GraphQL codegen.
import { useCallback, useEffect, useState } from 'react';
import debounce from 'lodash/debounce';

export function useNewConnectedTask(defaultTask: TaskFieldsDetailedFragment) {
  const [cachedTask, setCachedTask] = useState(defaultTask);
  const [updateTask] = useUpdateTaskMutation();

  // Sync local state whenever the server pushes a new task via props.
  useEffect(() => {
    setCachedTask(defaultTask);
  }, [defaultTask]);

  const debouncedUpdateTask = useCallback(
    debounce((task: TaskFieldsDetailedFragment) => {
      updateTask({
        variables: {
          data: task,
        },
        // The optimistic response makes Apollo treat the local value as
        // already applied, so the eventual server echo is a no-op.
        optimisticResponse: {
          __typename: 'Mutation',
          updateTask: task,
        },
      });
    }, 1000),
    [updateTask],
  );

  const setTask = useCallback(
    (task: TaskFieldsDetailedFragment) => {
      setCachedTask(task);        // immediate UI update
      debouncedUpdateTask(task);  // deferred server sync
    },
    [setCachedTask, debouncedUpdateTask],
  );

  return [cachedTask, setTask] as const;
}