
I am using a service worker to cache requests made to a REST service. The implementation corresponds to the 'on network response' strategy described on this page: https://web.dev/offline-cookbook/.

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('mysite-dynamic').then(function (cache) {
      return cache.match(event.request).then(function (response) {
        // Return the cached response if there is one; otherwise fetch,
        // store a clone in the cache, and return the network response.
        return (
          response ||
          fetch(event.request).then(function (response) {
            cache.put(event.request, response.clone());
            return response;
          })
        );
      });
    }),
  );
});

The idea is that:

1. caller makes the 1st request A
2. service worker intercepts request A
3. service worker checks whether request A is in the cache -> no
4. service worker sends request A to the REST service
5. service worker gets response A
6. service worker stores response A in the cache
7. service worker returns response A to the caller
...
    8. caller makes the 2nd request A
    9. service worker intercepts request A
    10. service worker checks whether request A is in the cache -> yes
    11. service worker gets response A from the cache
    12. service worker returns response A to the caller

The problem is that the REST request takes some time to return a response, so the following scenario, in which two or more identical requests are sent to the REST service, can occur:

1. caller makes the 1st request A
2. service worker intercepts request A
3. service worker checks whether request A is in the cache -> no
4. service worker sends request A to the REST service
    5. caller makes the 2nd request A
    6. service worker intercepts request A
    7. service worker checks whether request A is in the cache -> no
8. service worker gets response A
9. service worker stores response A in the cache
10. service worker returns response A to the caller
    11. service worker gets response A
    12. service worker stores response A in the cache
    13. service worker returns response A to the caller

How can I ensure that only one request is sent to the REST service?

Cache them. Uh, I mean, cache the promise itself (or: memoize the function that serves the request); you just have to do that before the fetch is executed. And you cannot put the promise in the built-in cache, and you'll have to choose the resource identifier (URL?) by which your cache is keyed. – Bergi

Thanks for the feedback. The memoization of the response promise is working fine. I have implemented it on the caller side for now, as I am not sure how to do it in the service worker. – vivi
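
For reference, here is a minimal sketch of what Bergi's suggestion could look like inside the service worker itself (an illustration, not tested code from the thread; the inflight map and its naming are assumptions): keep the pending fetch promise in a Map keyed by URL, and let any concurrent request for the same URL await that same promise.

// Dedupe concurrent identical requests: memoize the in-flight fetch
// promise keyed by URL (the "inflight" Map is a hypothetical name).
const inflight = new Map();

self.addEventListener('fetch', function (event) {
  event.respondWith(
    caches.open('mysite-dynamic').then(function (cache) {
      return cache.match(event.request).then(function (cached) {
        if (cached) {
          return cached;
        }
        let pending = inflight.get(event.request.url);
        if (!pending) {
          // First request for this URL: go to the network, cache a copy,
          // and remove the map entry once the response has been handled.
          pending = fetch(event.request)
            .then(function (response) {
              return cache
                .put(event.request, response.clone())
                .then(function () {
                  return response;
                });
            })
            .finally(function () {
              inflight.delete(event.request.url);
            });
          inflight.set(event.request.url, pending);
        }
        // Every waiting caller gets its own copy of the body, since a
        // Response body can only be read once.
        return pending.then(function (response) {
          return response.clone();
        });
      });
    })
  );
});

Like the snippet in the question, this assumes GET requests (cache.put() rejects for other methods), and the Map only lives as long as the service worker instance, which is enough for deduplicating concurrent requests.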

1 Answer


If you're open to using the Workbox libraries instead of "vanilla" service worker code, this recipe might help:

// See https://developers.google.com/web/tools/workbox/guides/using-bundlers
import {NetworkFirst} from 'workbox-strategies';

class DedupeNetworkFirst extends NetworkFirst {
  constructor(options) {
    super(options);
    // This maps inflight requests to response promises.
    this._requests = new Map();
  }

  // _handle is the standard entry point for our logic.
  async _handle(request, handler) {
    let responsePromise = this._requests.get(request.url);

    if (responsePromise) {
      // If there's already an inflight request, return a copy
      // of the eventual response.
      const response = await responsePromise;
      return response.clone();
    } else {
      // If there isn't already an inflight request, then use
      // the _handle() method of NetworkFirst to kick one off.
      responsePromise = super._handle(request, handler);
      this._requests.set(request.url, responsePromise);
      try {
        const response = await responsePromise;
        return response.clone();
      } finally {
        // Make sure to clean up after a batch of inflight
        // requests are fulfilled!
        this._requests.delete(request.url);
      }
    }
  }
}

You could then use the DedupeNetworkFirst class along with Workbox's router, or alternatively, use it directly in your own fetch handler if you'd rather not use more of Workbox.
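
For example, registering the strategy with Workbox's router, or calling it from a plain fetch listener, could look roughly like this (a sketch, not part of the original answer; the /api/ route pattern and the cache name are placeholders):

import {registerRoute} from 'workbox-routing';

// Option 1: let Workbox's router dispatch matching requests to the strategy.
registerRoute(
  ({url}) => url.pathname.startsWith('/api/'),
  new DedupeNetworkFirst({cacheName: 'mysite-dynamic'}),
);

// Option 2: call the strategy's public handle() method from your own
// fetch listener instead of using the router.
const dedupeNetworkFirst = new DedupeNetworkFirst({cacheName: 'mysite-dynamic'});

self.addEventListener('fetch', (event) => {
  if (new URL(event.request.url).pathname.startsWith('/api/')) {
    event.respondWith(
      dedupeNetworkFirst.handle({event, request: event.request}),
    );
  }
});

Either way, the deduplication lives entirely inside the strategy class, so the rest of the service worker does not need to know about it.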