3
votes

We experienced some stale Datastore data in our Python Google AppEngine application. I inspected the log and saw the following warning in the requests that were supposed to update the respective data:

Memcache set_multi() error: [':part', ':full']

The log entry was produced after an ndb.put(). No exception was raised, only this silent log output; however, the model was not written to the Datastore. This happened four times in a row.

To be accurate, I am not 100% sure whether the log was produced during the put() of my model or afterwards, while GAE was saving appstats for that particular request. Also, even if this log says that our memcache is full, I don't see why that is a problem (caches are expected to fill up from time to time, right?).

Yet, in all cases where this log was produced, the put() did not write data to the Datastore, and I cannot identify why. If the ndb.put() had failed, I would expect some kind of error/exception to be raised (my code handles these), but the warning was silent.

Any suggestions?

1
From the docs: "The return value is a list of keys whose values were NOT set. On total success, this list should be empty." OK, so could it be that the values for those keys are not pickle-able? Also, NDB uses atomic transactions, which could mean the memcache write needs to succeed for the whole saving action to work. - belteshazzar
The memcache part is supposed to be transparent behind the ndb API. I am not using memcache explicitly; I'm only using ndb, and AFAIK the ndb.put() docs do not state what happens if anything fails. - Christos
You don't need to be using it explicitly, but it is being used, and if it's wrapped up in an atomic operation, then the saving op won't work. Try disabling the cache for a bit: cloud.google.com/appengine/docs/python/ndb/… The atomic transaction then won't need a cache write to complete, and it should save the data. If it works, find ways to make sure the cache doesn't fill up, and re-enable caching. - belteshazzar
Hmm, I think I get what you are saying. But essentially, that means I would have to re-implement what ndb is meant to do transparently, probably with lots of extra effort (checking memcache capacity before each write, and so on...) - Christos
That specific error comes from appstats -- ndb does not log that error anywhere in its call path. ndb will raise an error if the Datastore put fails. When you say that the data does not get written to the Datastore, how are you checking this? With a get? A query? - Patrick Costello

1 Answer

1
votes

You can turn off memcache in the NDB Context class. This SO answer shows how to enable/disable memcache: ndb Models are not saved in memcache when using MapReduce

This code disables all caching:

ndb_ctx = ndb.get_context()
# Disable the in-context (per-request) cache for all keys
ndb_ctx.set_cache_policy(lambda key: False)
# Disable the memcache layer for all keys
ndb_ctx.set_memcache_policy(lambda key: False)
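For finer control than disabling everything, note that the policy hooks are plain callables that receive a key and return a bool, so caching can also be limited per kind instead of switched off globally. A minimal sketch of the policy logic (the `Report` kind and the `FakeKey` stand-in are hypothetical, used only so the function can run outside the App Engine SDK; real policies receive ndb.Key instances):

```python
from collections import namedtuple

# Stand-in for ndb.Key, so the policy logic can be exercised without
# the App Engine SDK; real policy functions receive ndb.Key objects.
FakeKey = namedtuple('FakeKey', ['kind_name'])
FakeKey.kind = lambda self: self.kind_name

def memcache_policy(key):
    # Cache everything except the hypothetical 'Report' kind.
    return key.kind() != 'Report'

print(memcache_policy(FakeKey('Report')))   # False -> skip memcache
print(memcache_policy(FakeKey('Account')))  # True  -> use memcache
```

With ndb available, the same function would be passed to `ndb_ctx.set_memcache_policy(memcache_policy)` in place of the `lambda key: False` above.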