(This is a follow-up to https://github.com/apollographql/apollo-client/issues/1886.)
I'm trying to build a text input that will update the value as the user types.
First attempt
I first tried to use the `optimisticResponse` to update the local cache as the user types. This works, except that it fires off a mutation on every keystroke. Aside from flooding the network with requests, there is also the problem of network inconsistency: it's possible for the last mutation to arrive first and the first mutation to arrive last, which leaves the server with a stale value. Here is an example of this race condition:
type: a
mutate request: a
type: b
mutate request: ab
arrives on server: ab
arrives on server: a
Now the server has recorded "a" in GraphQL, which is incorrect.
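For reference, the first attempt looked roughly like this (a sketch using react-apollo's `graphql` HOC; `updateText`, the `Text` type, and the id are made-up placeholders, not from my actual app):

```js
import React from 'react';
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

const UPDATE_TEXT = gql`
  mutation UpdateText($value: String!) {
    updateText(value: $value) {
      id
      value
    }
  }
`;

// Fires a mutation on every keystroke and optimistically writes
// the new value into the cache right away.
const TextInput = ({ value, mutate }) => (
  <input
    value={value}
    onChange={e =>
      mutate({
        variables: { value: e.target.value },
        optimisticResponse: {
          updateText: {
            __typename: 'Text',
            id: 1,
            value: e.target.value,
          },
        },
      })
    }
  />
);

export default graphql(UPDATE_TEXT)(TextInput);
```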
Adding debounce
To alleviate this, I added a debounce to the keypress event. Although this helps with the race condition described above, it doesn't solve it: it's still possible to hit the race if the network is slower than your debounce threshold.
Because we're now debouncing the text input, we need to introduce local state in that React component so it updates immediately as the user types (as @jbaxleyiii suggested in the GitHub issue). Now our state lives in two places (the component state and the Apollo cache).
A big problem with this is that the component won't update when it receives new props, e.g. when the data in GraphQL gets updated and pushed to the client.
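A minimal sketch of the debounced version, assuming lodash's `debounce` and the same made-up `updateText` mutation (note that this sketch has exactly the new-props problem just described):

```js
import React from 'react';
import debounce from 'lodash/debounce';
import gql from 'graphql-tag';
import { graphql } from 'react-apollo';

class DebouncedTextInput extends React.Component {
  // Local state so the input updates immediately while the mutation is debounced.
  state = { value: this.props.value };

  // Only fire the mutation after the user pauses typing for 300ms.
  save = debounce(value => {
    this.props.mutate({
      variables: { value },
      optimisticResponse: {
        updateText: { __typename: 'Text', id: 1, value },
      },
    });
  }, 300);

  onChange = e => {
    const value = e.target.value;
    this.setState({ value });
    this.save(value);
  };

  render() {
    return <input value={this.state.value} onChange={this.onChange} />;
  }
}

export default graphql(gql`
  mutation UpdateText($value: String!) {
    updateText(value: $value) {
      id
      value
    }
  }
`)(DebouncedTextInput);
```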
Adding network queue
Because debounce doesn't actually solve the race condition, I added a network queue (in addition to the debounce) that manages the mutation requests so that there is only ever one mutation in flight at a time. If it receives a mutation request while one is in flight, it queues it up to be fired when the first one comes back. If there is already a mutation queued, the new one replaces it (there can only ever be one item in the queue at a time). Here's an example:
type: a
send mutate request: a
type: b
queues mutate request: ab << wait to send this until "a" comes back
type: c
replaces queued request: abc << discard the queued request for "ab", it's old now
response from server: a
send mutate request: abc << send the queued mutation and clear the queue
response from server: abc
This guarantees that we won't have a race condition (at least from this client...).
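A minimal sketch of such a single-slot queue, assuming a `mutate` function that returns a promise (all names here are hypothetical):

```js
// Ensures only one mutation is ever in flight, and keeps at most one
// pending value; newer values replace the pending one.
function createMutationQueue(mutate) {
  let inFlight = false;
  let pending = null; // at most one queued value

  function flush() {
    if (inFlight || pending === null) return;
    const value = pending;
    pending = null;
    inFlight = true;
    const done = () => {
      inFlight = false;
      flush(); // send whatever was queued while this one was in flight
    };
    mutate(value).then(done, done);
  }

  return function enqueue(value) {
    pending = value; // replace any previously queued value
    flush();
  };
}

// Usage (hypothetical): call enqueue on every (debounced) keystroke.
// const enqueue = createMutationQueue(value =>
//   client.mutate({ mutation: UPDATE_TEXT, variables: { value } })
// );
// enqueue('a'); enqueue('ab'); enqueue('abc');
```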
There is a problem with this approach though: the `optimisticResponse` will only be applied when a mutation actually goes out. If a mutation is already in flight, we have to wait for the network to return before the updated `optimisticResponse` gets applied, and that can take a long time on slow networks. So, in the example above, we can't use `optimisticResponse` to update to "abc" until "send mutate request: abc".
This isn't a huge deal, just a delay, but updating immediately seems like something we should be able to do.
Attempt to update cache as the user types
In the docs, I learned that I can use `withApollo` to get access to the client and update the cache as the user types via `writeQuery`. This replaces the need for the `optimisticResponse`.
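Roughly, that looks something like this (a sketch assuming a hypothetical `TextQuery` that reads the same field the mutation writes, and a `saveToServer` callback standing in for the debounced/queued mutation from above):

```js
import React from 'react';
import gql from 'graphql-tag';
import { withApollo } from 'react-apollo';

const TEXT_QUERY = gql`
  query TextQuery {
    text {
      id
      value
    }
  }
`;

// Writes the new value straight into the cache on every keystroke,
// then lets the debounced/queued mutation sync it to the server.
const CacheBackedInput = ({ value, client, saveToServer }) => (
  <input
    value={value}
    onChange={e => {
      const value = e.target.value;
      client.writeQuery({
        query: TEXT_QUERY,
        data: { text: { __typename: 'Text', id: 1, value } },
      });
      saveToServer(value);
    }}
  />
);

export default withApollo(CacheBackedInput);
```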
However, a problem now arises when an old response comes back and updates the cache out from under us. Here's an example:
| action | cache |
| --- | --- |
| type: a | a |
| mutate request: a | a |
| type: b | ab |
| queues request: ab | ab |
| response from server: a | a  << oh no! |
| mutate request: ab | a  << we're not using optimisticResponse anymore |
| ... network time ... | a |
| response from server: ab | ab |
I suppose we could use `client.writeQuery` to update the cache as the user types and the `optimisticResponse` to update when a mutate request fires, but now this code is getting pretty hard to follow.
There also might be a way to deal with this in the `update` function, but I haven't gone that far.
Help?
I'm pretty new to Apollo, so maybe I'm missing something. Is there a better way to handle many rapid mutations in Apollo?