8
votes

I'd like to use an HTTP proxy (such as nginx) to cache large/expensive requests. These resources are identical for any authorized user, but their authentication/authorization needs to be checked by the backend on each request.

It sounds like Cache-Control: public, max-age=0 together with the nginx directive proxy_cache_revalidate on; is the way to do this. The proxy can cache the resource, but every subsequent request must make a conditional GET to the backend to confirm it's authorized before the cached resource is returned. The backend then sends a 403 if the user is unauthorized, a 304 if the user is authorized and the cached resource isn't stale, or a 200 with the new resource if it has expired.

In nginx, if max-age=0 is set, the response isn't cached at all. If max-age=1 is set, then if I wait one second after the initial request, nginx does perform the conditional GET; but within that first second it serves the response directly from cache without contacting the backend, which is obviously very bad for a resource that needs to be authenticated.

Is there a way to get nginx to cache the response but require revalidation immediately, on every request?
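
For reference, the nginx side of my test setup looks roughly like this (the port, backend address, and cache zone name are illustrative):

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 4000;

    location / {
        proxy_pass http://127.0.0.1:3000;           # backend sets Cache-Control
        proxy_cache app_cache;
        proxy_cache_revalidate on;                  # conditional GET once the entry is stale
        add_header X-Cached $upstream_cache_status; # MISS / HIT / REVALIDATED
    }
}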

Note that this does work correctly in Apache. Here are examples for both nginx and Apache, the first two with max-age=5, the last two with max-age=0:

# Apache with `Cache-Control: public, max-age=5`

$ while true; do curl -v http://localhost:4001/ 2>&1 >/dev/null | grep X-Cache; sleep 1; done
< X-Cache: MISS from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: HIT from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: HIT from 172.x.x.x

# nginx with `Cache-Control: public, max-age=5`

$ while true; do curl -v http://localhost:4000/ 2>&1 >/dev/null | grep X-Cache; sleep 1; done
< X-Cached: MISS
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: HIT
< X-Cached: REVALIDATED
< X-Cached: HIT
< X-Cached: HIT

# Apache with `Cache-Control: public, max-age=0`
# THIS IS WHAT I WANT

$ while true; do curl -v http://localhost:4001/ 2>&1 >/dev/null | grep X-Cache; sleep 1; done
< X-Cache: MISS from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x
< X-Cache: REVALIDATE from 172.x.x.x

# nginx with `Cache-Control: public, max-age=0`

$ while true; do curl -v http://localhost:4000/ 2>&1 >/dev/null | grep X-Cache; sleep 1; done
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS
< X-Cached: MISS

As you can see from the first two examples, both Apache and nginx can cache the responses, and Apache correctly caches even max-age=0 responses (revalidating on every request), but nginx does not.

Comments:

Can you change the backend logic? – Dmitry MiksIr

Great question! I think X-Accel-Redirect is what you're looking for. Good luck! – cnst

Yeah, I thought about X-Accel-Redirect too. Cons: two requests to the backend for each front-end request. Pros: a simple nginx config and a clean separation from the backend logic. – Dmitry MiksIr

@DmitryMiksIr, not true: if the request is really static, then it should probably be served directly by nginx (from an internal location), bypassing the whole proxy logic and not trashing/duplicating the filesystem-level cache; and even if it's still served from the backend, it'll still be cached just once. In fact, X-Accel-Redirect is overall a more flexible and understandable approach. The whole "revalidate" logic is an accident waiting to happen, whereas with X-Accel-Redirect you don't even have to worry about troubleshooting extra cache and performance issues. – cnst

The resource is not static, but I would like to cache it for some period of time. – tlrobinson

3 Answers

3
votes

I would like to address the additional questions and concerns that have come up in the discussion since my original answer of simply using X-Accel-Redirect (or X-Sendfile, if Apache compatibility is desired).

The solution that you seek as "optimal" (without X-Accel-Redirect) is incorrect, for more than one reason:

  1. All it takes is a request from an unauthenticated user for your cache to be wiped clean.

    • If every other request is from an unauthenticated user, you effectively have no cache at all.

    • Anyone can make requests to the public URL of the resource to keep your cache wiped clean at all times.

  2. If the files served are, in fact, static, then you're wasting extra memory, time, and disk and VM/cache space by keeping more than one copy of each file.

  3. If the content served is dynamic:

    • Is authentication roughly the same constant cost as generating the resource? Then what do you actually gain by caching it when revalidation is always required? A constant factor of less than 2x? You might as well not bother with caching just to tick a box, as the real-world improvement would be negligible.

    • Is generating the view orders of magnitude more expensive than performing authentication? Then caching the view and serving it to tens of thousands of requests at peak time sounds like a good idea! But for that to work, you had better not have any unauthenticated users lurking around, as even a couple could trigger the significant and unpredictable expense of regenerating the view.

  4. What happens to the cache in various edge-case scenarios? What if the user is denied access, but the developer fails to return the appropriate status code, and the response gets cached anyway? What if the next administrator decides to tweak a setting or two, e.g., proxy_cache_use_stale? Suddenly, you have unauthenticated users receiving privileged information. You're leaving all sorts of cache-poisoning attack vectors around by needlessly coupling independent parts of your application.

  5. I don't think it's technically correct to return Cache-Control: public, max-age=0 for a page that requires authentication. I believe the correct response would use must-revalidate or private in place of public.
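
    For instance, either of the following (the exact values are illustrative) would arguably be more appropriate:

        Cache-Control: public, max-age=0, must-revalidate
        Cache-Control: private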

The nginx "deficiency" on the lack of support for immediate revalidation w/ max-age=0 is by design (similarly to its lack of support for .htaccess). As per the above points, it makes little sense to immediately require re-validation of a given resource, and it's simply an approach that doesn't scale, especially when you have a "ridiculous" amount of requests per second that must all be satisfied using minimal resources and under no uncertain terms. If you require a web-server designed by a "committee", with backwards compatibility for every kitchen-sink application and every questionable part of any RFC, nginx is simply not the correct solution.

On the other hand, X-Accel-Redirect is really simple, foolproof, and a de-facto standard. It lets you separate content from access control in a very neat way, and it actually ensures that your content will be cached, instead of your cache being wiped clean willy-nilly. It is the correct solution worth pursuing. Trying to avoid one "extra" auth request per serving at peak time, at the price of saving a single request when no caching is needed in the first place and of having effectively no cache when the 10K requests do arrive, is not the correct way to design a scalable architecture.

0
votes

I think your best bet would be to modify your backend to support X-Accel-Redirect.

Its functionality is enabled by default, and is described in the documentation for proxy_ignore_headers:

“X-Accel-Redirect” performs an internal redirect to the specified URI;

You would then cache said internal resource, and automatically return it for any user who has been authenticated.

As the redirect has to be internal, there is no other way for the resource to be accessed (i.e., without an internal redirect of some sort), so, as per your requirements, unauthorised users won't be able to access it, but it can still be cached just like any other location.
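
As a rough sketch of the idea (the /protected/ prefix, port, and cache zone name are my own placeholders): the backend performs the auth check on every request and, when it passes, replies with an X-Accel-Redirect header naming an internal URI; nginx then serves and caches that internal location.

proxy_cache_path /var/cache/nginx keys_zone=resource_cache:10m;

server {
    listen 80;

    # Every request hits the backend, which checks authorization and,
    # if it passes, responds with: X-Accel-Redirect: /protected/<resource>
    location / {
        proxy_pass http://127.0.0.1:3000;
    }

    # Not reachable from outside; entered only via X-Accel-Redirect.
    location /protected/ {
        internal;
        proxy_pass http://127.0.0.1:3000;  # or serve static files via root/alias
        proxy_cache resource_cache;
        proxy_cache_key $uri;              # key on the internal URI, not the original request URI
        proxy_cache_valid 200 10m;         # only the expensive resource itself is cached
    }
}

This way the cheap auth check runs on every single request, while the expensive resource is generated at most once per cache period.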

0
votes

If you are unable to modify the backend app as suggested, or if the authentication is straightforward (such as basic auth), an alternative approach is to carry out the authentication in Nginx itself.

All you would have to do is implement this auth step and define the cache validity period; Nginx takes care of the rest, as per the process flow below.

Nginx Process Flow as Pseudo Code:

if (user is unauthorised) then
    Nginx declines the request;
else
    if (cache is stale) then
        Nginx gets resource from backend;
        Nginx caches resource;
        Nginx serves resource;
    else
        Nginx gets resource from cache;
        Nginx serves resource;
    end if
end if

The con is that, depending on the auth type you have, you might need something like the Nginx Lua module to handle the logic; for simpler cases there is a Lua-free alternative, sketched below.
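
If the backend can expose a lightweight auth endpoint, the stock ngx_http_auth_request_module implements exactly this flow without Lua. A minimal sketch, assuming a backend endpoint /auth (my placeholder) that returns 2xx for authorised users and 401/403 otherwise:

proxy_cache_path /var/cache/nginx keys_zone=app_cache:10m;

server {
    listen 80;

    # Auth sub-request target: 2xx allows the main request, 401/403 denies it.
    location = /auth {
        internal;
        proxy_pass http://127.0.0.1:3000/auth;
        proxy_pass_request_body off;       # the auth check only needs headers/cookies
        proxy_set_header Content-Length "";
    }

    location / {
        auth_request /auth;                # runs on every request, cached or not
        proxy_pass http://127.0.0.1:3000;
        proxy_cache app_cache;
        proxy_cache_valid 200 5m;          # your chosen cache validity period
    }
}

Note that auth_request may need to be compiled in (--with-http_auth_request_module), though most distribution packages include it.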

EDIT

I've seen the additional discussion and information given. Now, without fully knowing how the backend app works, but looking at the example config that the user anki-code gave on GitHub, which you commented on HERE, the config below avoids the issue you raised of the backend app's authentication/authorization checks not being run for previously cached resources.

I assume the backend app returns an HTTP 403 code for unauthenticated users. I also assume that you have the Nginx Lua module in place, since the GitHub config relies on it, although I note that the part you tested does not need that module.

Config:

server {
    listen 80;
    listen [::]:80;
    server_name 127.0.0.1;
    
    location / {
        proxy_pass http://127.0.0.1:3000; # Metabase here
    }
    location ~ /api/card((?!/42/|/41/)/[0-9]*/)query {
        access_by_lua_block {
            -- HEAD sub-request to a location excluded from caching,
            -- purely to run the backend's authentication check
            local res = ngx.location.capture("/api/card/42/query", { method = ngx.HTTP_HEAD })
            if res.status == 403 then
                return ngx.exit(ngx.HTTP_FORBIDDEN)
            else
                ngx.exec("@metabase")
            end
        }
    }
    
    location @metabase {
        # cache all card data except cards 42 and 41 (they have realtime data)
        set $no_cache 0;
        if ($http_referer !~ /dash/) {
            # cache only cards on a dashboard
            set $no_cache 1;
        }
        proxy_no_cache $no_cache;
        proxy_cache_bypass $no_cache;
        proxy_pass http://127.0.0.1:3000;
        proxy_cache_methods POST;
        proxy_cache_valid 8h;
        proxy_ignore_headers Cache-Control Expires;
        proxy_cache cache_all;
        proxy_cache_key "$request_uri|$request_body";
        proxy_buffers 8 32k;
        proxy_buffer_size 64k;
        add_header X-MBCache $upstream_cache_status;
    }
    location ~ /api/card/\d+ {
        proxy_pass http://127.0.0.1:3000;
        if ($request_method ~ PUT) {
            # when the card was edited reset the cache for this card
            access_by_lua 'os.execute("find /var/cache/nginx -type f -exec grep -q \\"".. ngx.var.request_uri .."/\\"  {} \\\; -delete ")';
            add_header X-MBCache REMOVED;
        }
    }
}

With this, I'd expect the test with $ curl 'http://localhost:3001/api/card/1/query' to run as follows:

First Run (With Required Cookie)

  1. The request hits location ~ /api/card((?!/42/|/41/)/[0-9]*/)query.
  2. In the Nginx access phase, a HEAD sub-request is issued to /api/card/42/query, a location excluded from caching in the config given.
  3. The backend app returns a non-403 response, since the user is authenticated.
  4. The request is then internally redirected to the @metabase named location block, which handles the actual request (from cache or from the backend) and returns the content to the user.

Second Run (Without Required Cookie)

  1. The request hits location ~ /api/card((?!/42/|/41/)/[0-9]*/)query.
  2. In the Nginx access phase, a HEAD sub-request is issued to the backend at /api/card/42/query.
  3. The backend app returns a 403 Forbidden response, since the user is not authenticated.
  4. The user's client gets a 403 Forbidden response.

If /api/card/42/query is itself resource-intensive, you may be able to create a simple card query that is used purely for the auth check instead.

This seems a straightforward way to go about it: the backend stays as it is, without messing about with it, and you configure your caching details in Nginx.