
I have a VPC with three private subnets (10.176.128.0/20, 10.176.144.0/20, 10.176.160.0/20) and three public subnets (10.176.0.0/20, 10.176.16.0/20, 10.176.32.0/20). All private subnets have the tag kubernetes.io/role/internal-elb=1 and the public subnets have the tag kubernetes.io/role/elb=1.

I run all my worker nodes in managed node groups, and AWS EKS created a default security group for the cluster. That cluster security group is the one I'm referring to below.

I have two namespaces in my Kubernetes cluster, test and stage. In each namespace I have 3 Services of type LoadBalancer, which together expose 8 ports per namespace. The load balancers are of type NLB. A sketch of one of the Services is below.
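For reference, each Service looks roughly like this (the name, selector, and ports are placeholders here; the annotation is the standard one for requesting an NLB from the in-tree controller):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app            # placeholder name
  namespace: test         # the same setup exists in the stage namespace
  annotations:
    # provision an NLB instead of the default classic ELB
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  selector:
    app: my-app           # placeholder selector
  ports:
    - name: http
      port: 80
      targetPort: 8080
    - name: grpc
      port: 9090
      targetPort: 9090
    # ... more ports; the 3 Services expose 8 ports in total per namespace
```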

Now to the problem: each LoadBalancer Service creates 4 rules per port in the security group for my nodes, one for each of the subnets the load balancer is located in and one for all traffic (0.0.0.0/0). 8 ports * 4 rules * 2 namespaces = 64 rules, but the default maximum is 60 rules per security group according to AWS, so when I try to create the last LB I get a RulesPerSecurityGroupLimitExceeded error.
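You can watch the rule count creep toward the limit with the CLI (the security group ID below is a placeholder for the cluster security group):

```sh
# Count the rules currently in the node security group
aws ec2 describe-security-group-rules \
  --filters Name=group-id,Values=sg-0123456789abcdef0 \
  --query 'length(SecurityGroupRules)'
```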

As I see it, there are two ways to solve this: either attach more security groups to my nodes, or somehow configure things so that fewer rules are created per port. In fact, one rule per port ought to be enough, since 0.0.0.0/0 already covers all my subnets. Another possibility is that I'm doing something wrong in the design. I have already tried the first option of adding more security groups and failed: the controller still tries to add the rules to the one that is already full.


1 Answer


We are hitting this issue as well. One thing you can do is request a quota increase for rules per security group in the AWS console. It feels to me like that will only postpone the issue slightly, though.
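For what it's worth, the same increase can be requested from the CLI via Service Quotas. The quota code below is my understanding of the one for "Inbound or outbound rules per security group" in the VPC service, so verify it with the first command before submitting:

```sh
# Find the quota whose name matches "rules per security group" and confirm its code
aws service-quotas list-service-quotas --service-code vpc \
  --query "Quotas[?contains(QuotaName, 'rules per security group')]"

# Request an increase (L-0EA8095F is, as far as I know, the rules-per-security-group quota)
aws service-quotas request-service-quota-increase \
  --service-code vpc \
  --quota-code L-0EA8095F \
  --desired-value 120
```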