2 votes

We have a service with multiple replicas that works with storage without transactions or locking. We therefore need some way to synchronize concurrent requests across the instances by a "sharding" key. Right now we host this service in a Kubernetes environment as a ReplicaSet.

Do you know of any simple out-of-the-box approaches for this, so we don't have to implement it from scratch?

Here are several of our ideas on how to do this:

  1. Deploy the service as a StatefulSet and implement a proxy API that routes traffic to a specific pod in this StatefulSet based on the sharding key from the HTTP request. In this scenario, all requests that must be synchronized are handled by a single instance, so handling them is no longer a problem.

  2. Deploy the service as a StatefulSet and implement custom logic in the service itself to re-route traffic to a specific instance (or a process on that exact instance). As I understand it, this cannot be implemented abstractly and would work only in a Kubernetes environment.

  3. Somehow expose each pod's IP outside the cluster and implement the routing logic on the client side.

  4. Just implement synchronization between instances through some third-party service like Redis.
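To make idea 1 concrete, the proxy only needs a deterministic mapping from sharding key to pod. StatefulSet pods get stable DNS names via their headless Service, so the proxy can hash the key to a pod index and forward the request there. This is a minimal sketch; the service/namespace names are made up, and a real proxy would also need health checks and retries:

```python
import hashlib

REPLICAS = 3  # must match the StatefulSet's spec.replicas
# Hypothetical names: a StatefulSet pod is reachable at
# <statefulset-name>-<i>.<headless-service>.<namespace>.svc.cluster.local
SERVICE = "myservice"
NAMESPACE = "default"

def pod_for_key(sharding_key: str) -> str:
    """Map a sharding key to a stable pod hostname.

    md5 is used instead of the built-in hash() because hash() is
    randomized per process, and every proxy replica must agree on
    the same mapping.
    """
    digest = hashlib.md5(sharding_key.encode("utf-8")).digest()
    index = int.from_bytes(digest[:8], "big") % REPLICAS
    return f"{SERVICE}-{index}.{SERVICE}.{NAMESPACE}.svc.cluster.local"
```

Note that a plain modulo re-maps most keys whenever REPLICAS changes; a consistent-hashing ring would limit that churn if you expect to scale the StatefulSet.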

I would like to try routing traffic to specific pods. If you know standard approaches for handling this case, it would be much appreciated.

Thank you in advance!


2 Answers

2 votes

Another approach would be to put a message queue (like Kafka or RabbitMQ) in front of your service. Your pods then subscribe to the MQ topic/stream, and each pod decides whether it should process a given message.
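This works because messages with the same key can be routed to the same partition, and each partition is consumed by exactly one pod in the consumer group. A minimal sketch of that invariant (Kafka's default partitioner actually uses murmur2; md5 here is just a stand-in, and the partition count is made up):

```python
import hashlib

NUM_PARTITIONS = 12  # hypothetical topic configuration

def partition_for_key(sharding_key: str) -> int:
    """All messages with the same key land on the same partition,
    so exactly one consumer processes them, in order."""
    digest = hashlib.md5(sharding_key.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % NUM_PARTITIONS

def should_process(sharding_key: str, my_partitions: set) -> bool:
    """A pod handles a message only if it owns the message's partition.

    In real Kafka the broker does this routing for you; the consumer
    group assigns partitions to pods, so no message is handled twice.
    """
    return partition_for_key(sharding_key) in my_partitions
```

With Kafka you get this behavior for free by producing messages with the sharding key as the record key; the explicit `should_process` check is only needed for brokers without key-based partitioning.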

Also, try looking into service meshes like Istio or Linkerd. They might have an out-of-the-box solution for your use case, although I wasn't able to find one.

1 vote

Remember that Network Policy is not traffic routing!

Pods are intended to be stateless and indistinguishable from one another; see pod-networking.

I recommend Istio. It has a dedicated component responsible for routing: Envoy, a high-performance proxy developed in C++ that mediates all inbound and outbound traffic for all services in the service mesh.

Useful article: istio-envoy-proxy.

Istio documentation: istio-documentation.

Useful Istio explanation: https://www.youtube.com/watch?v=e2kowI0fAz0.

But you should be able to create a Deployment per customer group, and a Service per Deployment. The NGINX Ingress can then be configured to map incoming requests to the specific customer-group Services by whatever request attributes are relevant.

Another solution is to use kube-router.

Kube-router can run as an agent or as a Pod (via a DaemonSet) on each node and leverages standard Linux technologies: iptables, ipvs/lvs, ipset, and iproute2.

Kube-router uses the IPVS/LVS technology built into Linux to provide L4 load balancing. Each ClusterIP, NodePort, and LoadBalancer Kubernetes Service is configured as an IPVS virtual service, and each Service endpoint is configured as a real server for that virtual service. The standard ipvsadm tool can be used to verify the configuration and monitor the active connections.

How it works: service-proxy.