
I'm seeing some strange behaviour when doing load testing.

Environment:

  • NGINX Ingress controller version: 0.44.0
  • Kubernetes version: 1.17.8
  • openidc.lua version: 1.7.4

Here's the situation:

  • The NGINX ingress controller is deployed as a DaemonSet, and because of the openidc module I set its sessionAffinity to ClientIP.
  • I have a simple stateless REST service deployed behind a basic ingress, which is the one under load test (no sessionAffinity on that one).

When I run the load test on the REST service without the sessionAffinity ClientIP, I get far beyond 25 req/s (about 130 req/s before the service's resources start to give out, but that's another issue). With sessionAffinity activated, I only reach 25 req/s.

After some research, I found some interesting things, described here: https://medium.com/titansoft-engineering/rate-limiting-for-your-kubernetes-applications-with-nginx-ingress-2e32721f7f57

So, since the load test should always be served by the same nginx pod, the formula should be: successful requests = period * rate + burst
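To make that concrete, here is a rough back-of-the-envelope calculation (the numbers, and the burst multiplier default of 5, are assumptions on my part taken from the docs, not measured values):

```
# Hypothetical illustration of the formula above
# rate   = 100 req/s   (nginx.ingress.kubernetes.io/limit-rps: "100")
# burst  = rate * 5 = 500   (assuming the default limit-burst-multiplier of 5)
# period = 60 s load test
#
# successful requests = period * rate + burst
#                     = 60 * 100 + 500
#                     = 6500
```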

So I tried adding the annotation nginx.ingress.kubernetes.io/limit-rps: "100" to my ingress, but no luck, still the same 25 req/s.

I also tried different combinations of the annotations from https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#rate-limiting, but no luck either.
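For reference, this is roughly the kind of ingress I was experimenting on (a minimal sketch: the names, host and port are placeholders, and only the two rate-limiting annotations come from the docs linked above):

```yaml
apiVersion: networking.k8s.io/v1beta1   # v1beta1 is the Ingress API available on Kubernetes 1.17
kind: Ingress
metadata:
  name: rest-service                    # placeholder name
  annotations:
    kubernetes.io/ingress.class: "nginx"
    nginx.ingress.kubernetes.io/limit-rps: "100"             # requests per second per client IP
    nginx.ingress.kubernetes.io/limit-burst-multiplier: "5"  # burst = limit-rps * multiplier
spec:
  rules:
    - host: rest.example.com            # placeholder host
      http:
        paths:
          - path: /
            backend:
              serviceName: rest-service # placeholder backend service
              servicePort: 80
```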

Am I missing something?

Hello, do I understand correctly that you are seeing issues (performance-wise) after changing the sessionAffinity? Is this reproducible when you omit the Ingress controller and just use a Service of type LoadBalancer? Is your Kubernetes cloud-provider-managed or is it a self-managed cluster? – Dawid Kruk

1 Answer


In fact, it was sneakier than that.

It had nothing to do with the sessionAffinity, nor with rate limiting (in fact there is no rate limit by default; I didn't get that at first. The rate-limiting annotations are only there if you want to throttle traffic, e.g. for DDoS protection).

The problem was that I had enabled the ModSecurity AND OWASP rules options in the controller's configmap.

Because of that, request processing was so slow that it capped the number of requests per second. When sessionAffinity was not set I didn't notice the problem, since the load was spread across all the pods and the req/s looked fair.

But with sessionAffinity, i.e. with the whole load test hitting a single pod, the problem was clearly visible.

So I had to remove ModSecurity and the OWASP rules, and the apps themselves will now be responsible for that.
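For clarity, these are the two configmap options I had added and then removed (a minimal sketch; the configmap name and namespace depend on how the controller was deployed):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-configuration    # whatever configmap your ingress-nginx deployment reads
  namespace: ingress-nginx
data:
  # Enabling these runs ModSecurity with the OWASP core rule set
  # on every request that goes through the controller:
  enable-modsecurity: "true"
  enable-owasp-modsecurity-crs: "true"
```

Setting both back to "false" (or removing them) is what brought the throughput back.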

A little sad, as I wanted security centralized on nginx so the apps wouldn't need to handle it, but not at that cost...

I'd be curious to understand what ModSecurity is doing exactly to be so slow.