0
votes

I'm using a ReplicaSet to manage my pods, and I want to expose those pods with a Service. The Pods created by a ReplicaSet have randomized names:

NAME                   READY   STATUS    RESTARTS   AGE
master                 2/2     Running   0          20m
worker-4szkz           2/2     Running   0          21m
worker-hwnzt           2/2     Running   0          21m

I want to expose these Pods with a Service, since some policies prevent me from using hostNetwork=true. I'm able to expose them by creating a NodePort Service for each Pod with kubectl expose pod worker-xxxxx --type=NodePort.

This is clearly not a flexible approach. I wonder how to create a single Service (LoadBalancer type, maybe?) that dynamically routes to all the replicas in my ReplicaSet. If that requires a Deployment, that would be fine too.

Thanks for any help and advice!

Edit:

I put a label on my ReplicaSet's Pod template and created a NodePort Service called worker that selects that label. But I'm not able to ping worker from any of my pods. What's the correct way of doing this?

Below is the output of kubectl describe service worker. As the Endpoints show, the pods are picked up.

Name:                     worker
Namespace:                default
Annotations:              <none>
Selector:                 tag=worker
Type:                     NodePort
IP:                       10.106.45.174
Port:                     port1  29999/TCP
TargetPort:               29999/TCP
NodePort:                 port1  31934/TCP
Endpoints:                10.32.0.3:29999,10.40.0.2:29999
Port:                     port2  29996/TCP
TargetPort:               29996/TCP
NodePort:                 port2  31881/TCP
Endpoints:                10.32.0.3:29996,10.40.0.2:29996
Port:                     port3  30001/TCP
TargetPort:               30001/TCP
NodePort:                 port3  31877/TCP
Endpoints:                10.32.0.3:30001,10.40.0.2:30001
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>
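For reference, the worker Service described above would have been created from a manifest roughly like the following. This is a sketch reconstructed from the describe output; the tag: worker selector, port names, and port numbers are taken from it, and the NodePort values are left for the cluster to assign.

apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: NodePort
  selector:
    tag: worker        # must match the labels on the ReplicaSet's Pod template
  ports:
    - name: port1
      protocol: TCP
      port: 29999
      targetPort: 29999
    - name: port2
      protocol: TCP
      port: 29996
      targetPort: 29996
    - name: port3
      protocol: TCP
      port: 30001
      targetPort: 30001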
Did you try adding a label to your deployment, something like app: worker, and creating a service with selector app: worker? That service should act like a load balancer in front of all the pods with that label. - Burak Serdar
@bserdar I created a NodePort Service with the matching selectors, and the Service is showing the worker Pods in Endpoints. But I'm not able to ping worker from master (worker is the Service name). Do you know what I'm doing wrong? Thank you! - OrlandoL
That service name will not be visible outside the cluster. The name should be visible to all the pods in the cluster. Since it is a node port, any worker should have that port open, and you can access it using workernode:port. If you need to expose that service outside the cluster, you either need an ingress, or you need an external load balancer pointing to all workernode:port nodes. - Burak Serdar
@bserdar Sorry, I edited the kubectl get pods output. The master is a pod in the k8s cluster, so I expect it to be able to talk to the worker pods through that service. I added an edit showing the service port forwarding. Am I not pinging the correct ports? Thanks! - OrlandoL
What kube-proxy mode are you using, iptables or ipvs? If it is iptables mode, the service is not pingable since the iptables rules don't have a match for ICMP traffic. Try telnet worker 29999 - Hang Du

1 Answer

2
votes

I believe you can improve this a bit by using a Deployment instead of a ReplicaSet (this is now the standard way), i.e. you could have a Deployment as follows:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
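The Deployment's pods still get randomized names, but they all carry the app: nginx label from the template, so they can be addressed as a group. For example, against a running cluster you could verify with:

```shell
# List only the pods created by this Deployment, via the label selector
kubectl get pods -l app=nginx
```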

Then your service to match this would be:

apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  # This is the important part as this is what is used to route to 
  # the pods created by your deployment
  selector:
    app: nginx
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
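Once both manifests are applied, any pod in the cluster can reach the replicas through the Service's cluster DNS name (nginx-service, or fully qualified, nginx-service.default.svc.cluster.local, assuming the default namespace). Note that pinging the Service name will generally not work with the iptables kube-proxy mode, because the ClusterIP is virtual and only matches TCP/UDP traffic on the declared ports; test with an actual TCP connection instead. A rough sketch against a running cluster (file names here are placeholders):

```shell
# Apply the Deployment and the Service
kubectl apply -f nginx-deployment.yaml
kubectl apply -f nginx-service.yaml

# From inside the cluster, ICMP ping against the ClusterIP will fail,
# but a TCP request to the declared port succeeds:
kubectl run -it --rm curl-test --image=curlimages/curl --restart=Never -- \
  curl -s http://nginx-service.default.svc.cluster.local:80
```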