
I want to get the count of "alive" keys at any given time. According to the API documentation, getItemCount() is meant to return this.

However, it doesn't. Expired keys are not reducing the value of getItemCount(). Why is this, and how can I accurately get a count of all "active" or "alive" keys that have not expired?

Here's my put code:

syncCache.put(uid, cachedUID, Expiration.byDeltaSeconds(3), SetPolicy.SET_ALWAYS);

That should expire keys after 3 seconds. It does expire them, but getItemCount() does not reflect the true count of keys.

UPDATE: It seems memcache might not be what I should be using, so here's what I'm trying to do.

I wish to write a Google App Engine server/app that works as a "users online" feature for a desktop application. The desktop application makes an HTTP request to the app with a unique ID as a parameter. The app stores this UID along with a timestamp. This is done every 3 minutes.

Every 5 minutes, any entries with a timestamp outside that 5-minute window are removed. Then you count how many entries remain, and that's how many users are "online".

The expire feature seemed perfect as then I wouldn't even need to worry about timestamps or clearing expired entries.
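For reference, a minimal sketch of the timestamp-based approach described above, in plain Java (the class, method, and field names here are my own, and the current time is passed in explicitly rather than using any App Engine API):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

// Tracks "users online" by last-seen timestamp; entries older than
// the window are pruned before counting.
class OnlineTracker {
    private final Map<String, Long> lastSeen = new ConcurrentHashMap<>();
    private final long windowMillis;

    OnlineTracker(long windowMillis) { this.windowMillis = windowMillis; }

    // Called on every heartbeat request (every 3 minutes per client).
    void heartbeat(String uid, long nowMillis) {
        lastSeen.put(uid, nowMillis);
    }

    // Remove stale entries, then count what's left.
    int countOnline(long nowMillis) {
        lastSeen.values().removeIf(ts -> nowMillis - ts > windowMillis);
        return lastSeen.size();
    }

    public static void main(String[] args) {
        OnlineTracker t = new OnlineTracker(5 * 60 * 1000L); // 5-minute window
        t.heartbeat("user-a", 0L);
        t.heartbeat("user-b", 0L);
        t.heartbeat("user-a", 4 * 60 * 1000L);             // user-a refreshes at t=4min
        System.out.println(t.countOnline(6 * 60 * 1000L)); // user-b is stale -> 1
    }
}
```

Pruning inside the count call means no separate sweep is needed for correctness; a scheduled job would only be an optimization.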

What does getItemCount show immediately before you put the item into the cache and immediately after? Is the item definitely being added? - matt freake
getItemCount increments fine and displays correctly when creating a new key. However, it should be decrementing when keys expire, if I understand correctly? - JasonF

2 Answers

0 votes

It might be a problem in the documentation; the Python documentation does not say anything about counting only live keys, and the same behavior is reproducible in Python.

See also this related post: How does the lazy expiration mechanism in memcached operate?

0 votes

getItemCount() may count expired keys because that is how memcache (and many other caches) works: expiration is lazy, so an expired entry is only discarded when it is next accessed.
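To make the lazy-expiration behavior concrete, here is a toy cache in plain Java (my own names, not the actual memcache implementation): the item count still includes an expired key until a read touches it.

```java
import java.util.HashMap;
import java.util.Map;

// Toy cache illustrating lazy expiration: expired entries linger in the
// store (inflating itemCount()) until a get() touches them.
class LazyCache {
    private static final class Entry {
        final Object value;
        final long expiresAt;
        Entry(Object v, long e) { value = v; expiresAt = e; }
    }

    private final Map<String, Entry> store = new HashMap<>();

    void put(String key, Object value, long ttlMillis, long nowMillis) {
        store.put(key, new Entry(value, nowMillis + ttlMillis));
    }

    Object get(String key, long nowMillis) {
        Entry e = store.get(key);
        if (e == null) return null;
        if (nowMillis >= e.expiresAt) { // expired: evicted lazily, on access
            store.remove(key);
            return null;
        }
        return e.value;
    }

    // Counts everything still in the store, including expired-but-untouched keys.
    int itemCount() { return store.size(); }

    public static void main(String[] args) {
        LazyCache c = new LazyCache();
        c.put("uid", "cachedUID", 3000L, 0L);     // 3-second TTL, as in the question
        System.out.println(c.itemCount());        // 1: still counted after expiry
        System.out.println(c.get("uid", 5000L));  // null: read at t=5s finds it expired
        System.out.println(c.itemCount());        // 0: the read evicted it
    }
}
```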

Memcache can help you do what you describe, but not in the way you tried. Consider the completely opposite situation: you put online users in memcache, and App Engine wipes them out because it runs short of free memory. The cache doesn't give you any guarantee that items will be stored for any particular period; what memcache does give you is fewer requests to the datastore and lower latency.

One way to do it: maintain a sorted map of (user-id, last-login-refreshed) entries stored in the datastore (or in memcache if you do not need it to be very precise). On every login/refresh you update the value for that key, and a periodic cron job removes old users from the map. The size of the map is then the number of logged-in users at that moment.

Make sure the map fits in 1 MB, which is the size limit for a single value in both memcache and the datastore.
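A quick way to sanity-check that limit, using plain Java serialization as a rough proxy for how the map would be encoded (the numbers here are illustrative assumptions: 10,000 users with 36-character UIDs):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.util.HashMap;

// Estimates how large a (uid -> last-refresh-millis) map gets when
// serialized, to sanity-check it against a ~1 MB single-value limit.
// Java serialization is only a rough proxy for the real encoding.
class MapSizeCheck {
    static int serializedSize(HashMap<String, Long> map) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(map);
        }
        return bytes.size();
    }

    public static void main(String[] args) throws IOException {
        HashMap<String, Long> online = new HashMap<>();
        for (int i = 0; i < 10_000; i++) {        // 10k simulated users
            // 36-character UID: "uid-" + 32 zero-padded digits
            online.put("uid-" + String.format("%032d", i), System.currentTimeMillis());
        }
        int size = serializedSize(online);
        System.out.println(size < 1_000_000);     // does it fit under 1 MB?
    }
}
```

With entries this shape, 10,000 users serialize to well under 1 MB, so the single-map design is plausible at that scale; for much larger user counts you would need to shard the map across several values.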