
I have a Google Cloud Storage (GCS) bucket marked as public. I use v4 signed URLs to upload images to this public bucket. I can see via HTTP headers that GCS caches images for up to 1 hour. However, when I upload newer versions of the same images (having the same file names), GCS does not invalidate the cache and still serves the older versions of these images. How do I ensure cache invalidation when newer versions of these images are uploaded to GCS? Has to work with v4 signed URLs.


1 Answer


To my understanding, GCS is not caching your images itself; it sets a default caching policy (the Cache-Control header) that is then honored by the user's browser and by intermediate internet caches.

The default policy (called "built-in caching") for publicly readable objects in GCS is indeed 3600 seconds (reference).

So if the policy is set to 1 hour, any browser (or other cache along the way) that fetches such a file will keep serving its cached copy for that long (please check this article).
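The behaviour above can be sketched with a toy browser cache (all names here are hypothetical; real browsers implement this per the HTTP caching rules): once an object is cached with max-age=3600, the cache keeps answering from its own copy, so a newer upload to GCS is invisible until the cached entry expires.

```python
import time

class ToyBrowserCache:
    """Minimal sketch of an HTTP cache honoring Cache-Control: max-age."""

    def __init__(self):
        self._entries = {}  # url -> (body, stored_at, max_age)

    def get(self, url, fetch_from_origin, now=None):
        now = time.time() if now is None else now
        entry = self._entries.get(url)
        if entry:
            body, stored_at, max_age = entry
            if now - stored_at < max_age:
                return body, "from-cache"  # still fresh: the origin is never contacted
        # Stale or missing: go to the origin (GCS) and cache the response.
        body, max_age = fetch_from_origin(url)
        self._entries[url] = (body, now, max_age)
        return body, "from-origin"

# Simulated GCS bucket serving public objects with max-age=3600.
bucket = {"logo.png": "v1"}
def gcs(url):
    return bucket[url], 3600

cache = ToyBrowserCache()
t0 = 1_000_000.0
print(cache.get("logo.png", gcs, now=t0))         # ('v1', 'from-origin')
bucket["logo.png"] = "v2"                         # new version uploaded via signed URL
print(cache.get("logo.png", gcs, now=t0 + 600))   # ('v1', 'from-cache')  <- stale copy
print(cache.get("logo.png", gcs, now=t0 + 3601))  # ('v2', 'from-origin') after expiry
```

Note that the second request returns the stale "v1" even though GCS already holds "v2": GCS never sees that request at all.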

If I understand correctly, the requirement is to invalidate the cache whenever a new version is uploaded to GCS. Unfortunately, this does not seem to be possible: the cache is not in GCS but in the user's browser, or somewhere else on the internet, outside of GCS's control.

At this point I think you have to decide: either set the objects to not be cached at all, so that a new version is downloaded immediately, with all the related consequences (app performance, GCS usage price, etc.), or accept that for some time the app will serve old versions of the content.
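The trade-off can be made concrete with a back-of-the-envelope count (the traffic numbers below are made up purely for illustration): with max-age=3600, a user's browser hits GCS at most once per hour per image, while no-cache sends every single request to GCS, which is what drives the extra usage cost.

```python
def origin_requests(requests_per_hour, hours, max_age_seconds):
    """Origin (GCS) requests for one user repeatedly viewing one image.

    With a freshness lifetime of max_age_seconds, only the first request in
    each max-age window reaches GCS; the rest are served from the local cache.
    max_age_seconds == 0 models Cache-Control: no-cache (every request hits GCS).
    """
    total = requests_per_hour * hours
    if max_age_seconds <= 0:
        return total
    windows = (hours * 3600 + max_age_seconds - 1) // max_age_seconds  # ceiling
    return min(total, windows)

# Hypothetical load: one user loading a page with the image 60x/hour for 24h.
print(origin_requests(60, 24, 3600))  # 24   -> one fetch per hour with max-age=3600
print(origin_requests(60, 24, 0))     # 1440 -> every request goes to GCS (no-cache)
```

The same freshness (or staleness) window applies per cache, so the real total scales with the number of distinct users and intermediate caches.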

To set up Cache-Control for GCS objects, please refer to this link. Check the performance considerations as well.